Artificial intelligence has become the buzzword of the year. In 2023, OpenAI’s large language model (LLM) chatbot, ChatGPT, made waves across industries thanks to its wide range of applications. Over the last year, organizations and consumers have found innovative ways to use generative artificial intelligence (AI) to various degrees, from marketing content to digital art.
A recent Deloitte study examined the benefits generative AI currently offers organizations and the specific gains industry leaders hope to achieve by implementing these applications in company workflows. As in previous reports on the effects of digitalization on businesses, increased productivity through process optimization, which reduces both capital spending and time, was among the most desired benefits.
LLMs like ChatGPT offer organizations a new avenue to increase productivity, as these models can absorb tedious tasks, such as customer service, and perform them diligently with little human supervision. Employees can oversee the AI’s work, fixing occasional problems while focusing on new opportunities that arise from AI insights, promoting rapid innovation.
As education around generative AI implementation grows and more industry leaders join the discussion, the question has become how AI can support an industry, not if or when. While the focus on AI has largely centered on consumer-facing businesses, such as automotive and consumer electronics, its applications within high-reliability industries such as healthcare and defense are on the rise.
Now, AI within the medical and defense sectors is not new. The history of artificial intelligence in high-reliability sectors is well documented. Within the last decade, the medical industry alone has released many studies reporting the advantages of AI in various healthcare roles, such as diagnosing breast cancer from radiographs.
In military applications, the history is much the same. However, AI in defense is a far more sensitive topic due to the wide-ranging consequences of specific applications. These issues resurfaced recently when OpenAI removed its ban on military use of ChatGPT and its other AI tools.
Since partnering with the U.S. Department of Defense (DoD) on artificial intelligence tools, specifically open-source cybersecurity, OpenAI has quietly dropped its ban on military use. In early January, OpenAI changed its policy to remove specific mentions of the military, but retained language stating that its services may not be used to “harm yourself or others,” which includes to “develop or use weapons.”
In a discussion with CNBC, an OpenAI spokesperson clarified the goal of the policy change.
“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” the spokesperson said. “There are, however, national security use cases that align with our mission.”
OpenAI isn’t the first tech company to open itself to military contracts. As the same CNBC article notes, “workers at virtually every tech giant involved with military contracts have voiced concerns.” Google’s involvement in Project Maven and its collaboration with the Israeli government both sparked protests.
Artificial intelligence in military applications raises many ethical, operational, and strategic risks. The most considerable concern with AI in defense typically revolves around the ethics of autonomous weapons. In a major report by the research organization RAND, the authors found that continued global integration of AI into military systems is likely.
While the public was generally supportive of AI for military use, the context of its implementation was a significant concern. Public support depended on what the AI systems were used for and the level of human involvement in each system’s development and control. Respondents were more favorable toward autonomous weapons used for self-defense than for other scenarios. This data corroborates the three significant risks the military must address for AI use, with ethical risks being the most important.
Paul Scharre, Vice President and Director of Studies at the Center for a New American Security, said in an interview with PBS that autonomous weapons raise legal, moral, and ethical questions concerning human control. While talks are ongoing, with 30 countries preferring preemptive, legally binding treaties to ban these specific weapons, none of the leading military powers or robotics developers are in this group.
Despite the lack of backing, that could change in a few years as broader concerns about AI, seen throughout 2023, draw more attention to regulation. Tech leaders, including Microsoft and OpenAI, want to work alongside governments to develop rules for AI use.
However, Scharre noted that militaries have used AI for many years, and most military AI applications are not for combat. Today, they focus largely on logistics, personnel, and maintenance.
“It’s moving people pulling things from one place to another,” Scharre said. “On a day-to-day basis, it looks a lot like what Walmart or Amazon do. It’s what happens at the end, it’s different.”
Scharre continued, “AI has advantages in all of those other non-combat functions that are critical to how militaries operate. And if militaries can make their maintenance and logistics, and personnel and finance functions, just 10 percent better, that's going to have huge impacts for militaries.”
The most significant benefit AI offers the military is rapid information processing, a capability valuable to defense and civilian organizations alike. In an interview with Bloomberg, U.S. Air Force Colonel Matthew Strohmeyer discussed the quick success of a data-based exercise that used an LLM to perform a military task.
“It was highly successful. It was very fast,” he said. “We are learning that this is possible for us to do.”
The biggest benefit AI can offer its users, regardless of industry, is increased productivity through streamlined workflows, rapid data processing, and time savings. Fast information processing lets AI enhance logistics, maintenance, and even finance functions within the military, and greater processing speed allows innovation to occur much more quickly.
Thanks to recent progress in natural language processing (NLP), which enables communication with machines through everyday grammar and syntax rather than code, newer models have achieved higher levels of accuracy and fluency. Combined with significant breakthroughs in computer vision for image and video analysis, AI can now perform more complex jobs.
Strohmeyer told Bloomberg that LLMs represent a major shift for a military in which so little is digitized or connected. A single information request from a specific part of the military can take staff hours or days to complete. In one test in this area, the AI model completed the request within 10 minutes.
“That doesn't mean it's ready for primetime right now. But we just did it live. We did it with secret-level data,” Strohmeyer said. These models were trained with classified operational information to respond to sensitive questions. In the long term, this can rapidly speed up strategic decision-making by using LLMs to help plan for responses during escalating global crises.
This includes time-sensitive areas such as casualty care and evacuation, where faster strategic decision-making in high-stress situations could prevent excessive casualties. As with AI in medical facilities, AI can quickly produce warnings and treatment suggestions based on data from an extensive medical trauma case library. While decisions are ultimately left to human medics, diagnoses and treatment plans can be produced at a much faster pace.
That said, military leadership will not become wholly reliant on LLMs anytime soon. Generative AI still has kinks to work out, particularly compounding bias and concerns over hackability. These challenges should resolve with time, as generative AI and LLMs grow more accurate with use and problems such as AI hallucinations are rectified.
Working alongside the creators of these LLMs, as the U.S. military is doing with OpenAI, will help increase the efficacy of its cybersecurity. As with most organizations, the use of AI within military applications globally will only increase with time, not decrease. To remain competitive, collaborating with technical partners is necessary to establish a strong foundation for continual improvement.
While countries worldwide push for confidence-building and risk-reduction measures to limit specific military AI use cases, there are far more pressing non-combat needs that AI can aid significantly, such as casualty care and evacuation. Rapid information processing and better data accuracy can lead to faster responses or, in some cases, de-escalation informed by contextual information that might otherwise be overlooked.
To accomplish these goals and implement further artificial intelligence applications, defense contractors need access to a secure supply of vital electronic components. Unfortunately, that can be difficult to achieve, as seen during the global semiconductor shortage, when the DoD encountered issues obtaining enough components for military applications, affecting U.S. efforts to aid Ukraine’s war effort. Securing a stable supply of these crucial components is important for aerospace and defense original equipment manufacturers (OEMs), contract manufacturers (CMs), and electronic manufacturing service (EMS) providers.
It is prudent to use intelligent market monitoring to stay aware of disruptions such as part change notifications (PCNs) and end-of-life (EOL) announcements, so your organization can take the appropriate steps to maintain a stable supply chain. With forewarning, companies can proactively purchase more stock, initiate redesigns around form-fit-function (FFF) alternates, or locate drop-in replacements (DIRs) before constraints occur.
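As a rough illustration of this kind of monitoring logic, a BOM-level alert filter might map each lifecycle notice to one of the mitigation steps above. The data structures, status codes, and stock thresholds here are hypothetical sketches, not the workings of any particular monitoring tool:

```python
from dataclasses import dataclass

# Hypothetical BOM line item; field names are illustrative only.
@dataclass
class BomItem:
    mpn: str              # manufacturer part number
    status: str           # lifecycle notice: "active", "pcn", "eol", or "nfnd"
    stock_months: float   # months of inventory on hand

def recommend_action(item: BomItem) -> str:
    """Map a lifecycle notice to a proactive mitigation step."""
    if item.status == "eol":
        # End-of-life: secure remaining stock if inventory is thin,
        # otherwise locate a drop-in replacement (DIR).
        return "last-time buy" if item.stock_months < 6 else "locate DIR"
    if item.status == "nfnd":
        # Not recommended for new design: plan a redesign
        # around a form-fit-function (FFF) alternate.
        return "redesign with FFF alternate"
    if item.status == "pcn":
        # Part change notification: review the change before it reaches production.
        return "review PCN impact"
    return "no action"

bom = [
    BomItem("CAP-0603-104", "active", 12.0),
    BomItem("MCU-STM32F1", "eol", 2.0),
    BomItem("REG-LDO-3V3", "nfnd", 8.0),
]

# Collect only the parts that need attention.
alerts = {i.mpn: recommend_action(i) for i in bom if recommend_action(i) != "no action"}
print(alerts)
```

In practice, a monitoring service would feed the status field from supplier notices rather than hard-coded values; the point is that each alert type triggers a distinct, pre-planned response.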
This can be accomplished with digital tools such as the premier market intelligence tool for the electronic components supply chain, Datalynq, and the leading e-commerce site for electronic components, Sourcengine. Datalynq gives users unobscured visibility into the supply chain, alerting engineers to sole source parts, not recommended for new design (NFND) notices, inactivity, and more within their bill of materials (BOM).
Armed with this data, alongside possible alternates for a given component, users can manage cases around a risky component through Datalynq’s case management feature, which abides by the DoD’s DMSMS SD-22 guidelines. Once complete, users can then purchase the needed components or replacements through Sourcengine.
The shortages of tomorrow are bound to cause more strain on organizations. A stable supply chain is necessary for any organization today, especially those in high-reliability markets. Find out how Sourceability can help solve your supply chain problems by contacting one of our representatives.