Key Priorities for Secure and Responsible Public Sector AI

(Source: Shutterstock)

Artificial intelligence (AI) is rapidly being integrated into public sector operations. In 2024 alone, federal agencies reported more than 1,700 AI use cases, more than double the previous year's total. With half of these spanning departments with sensitive national missions, such as health care, veterans' services, and homeland security, the need to secure AI systems in government is both urgent and complex. Success relies on an end-to-end approach that tackles risk, maintains compliance, and builds systems that are explainable and resilient.

Prioritizing trust and accountability

One of the fundamental challenges of securing AI in the public sector is navigating the evolving landscape of rules and regulations. Even in the absence of comprehensive federal AI legislation, existing data protection laws and sector-specific rules already inform how AI should be governed. Agencies must ensure their AI systems are accountable and meet standards for ethical use, including privacy, transparency, bias mitigation, and oversight.

Regulators do not distinguish between mistakes made by humans and mistakes made by algorithms. The effects are the same, and the potential costs of non-compliance, especially at scale, can be significant. Against this backdrop, transparency and explainability are essential. In high-risk scenarios in particular, a model's behavior and recommendations can have life-or-death implications, so it is critical to understand how and why models reach their decisions.

Responsible AI governance should therefore be rooted in a multi-layered framework that encompasses ethical standards, legal compliance, human oversight, and accountability across the AI lifecycle. Systems should be designed with tools and processes that enable developers, operators, and oversight bodies to trace decisions and explain model behavior. The logic behind automated outputs must be clear and evaluable; otherwise, public sector IT teams cannot audit AI-powered decisions, assess fairness, or maintain accountability when failures occur.
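To make traceability concrete, here is a minimal sketch in Python of one way to attach an audit trail to model predictions. It assumes a hypothetical model object with a scikit-learn-style predict() method, and the record fields (timestamp, model version, input hash) are illustrative choices rather than a prescribed government standard.

    # Minimal sketch: wrap a model so every prediction leaves an auditable record.
    # The wrapped `model` is assumed to expose a scikit-learn-style predict().
    import hashlib
    import json
    import time

    class AuditedModel:
        def __init__(self, model, model_version, log_path="decision_log.jsonl"):
            self.model = model
            self.model_version = model_version
            self.log_path = log_path

        def predict(self, features):
            """Run inference on one feature vector and log the decision."""
            prediction = self.model.predict([features])[0]
            record = {
                "timestamp": time.time(),
                "model_version": self.model_version,
                # Hashing the input lets auditors verify what the model saw
                # without storing sensitive raw data in the log itself.
                "input_hash": hashlib.sha256(
                    json.dumps(features).encode()
                ).hexdigest(),
                "prediction": str(prediction),
            }
            with open(self.log_path, "a") as log:
                log.write(json.dumps(record) + "\n")
            return prediction

An append-only log like this gives oversight teams something to replay and audit; a production deployment would also need tamper protection and access controls on the log itself.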

Securing data across the AI lifecycle

Data is the foundation of every AI model, and it must be secured at every stage: at rest, in transit, and in use. This is especially important for public agencies, which often handle highly sensitive data, from citizens' records to national intelligence. Such data requires layered defenses that address both traditional cybersecurity threats and the emerging risks unique to AI.

At the storage layer, datasets should be protected from unauthorized access and tampering. When data is transmitted, whether over terrestrial networks or via satellite communications, it must be encrypted using modern and, preferably, quantum-resistant standards. When data is in use, confidential computing environments can help prevent memory-level attacks, which operate entirely within a process's or system's memory and bypass traditional safeguards. All of these layers of protection matter even more in the AI era, because AI systems touch far more of an organization than traditional software does.
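As a small illustration of the at-rest layer, the sketch below encrypts a record set with the open-source Python cryptography package (Fernet, an authenticated symmetric scheme built on AES). It is a generic sketch, not agency guidance: key management, the hard part in practice, is reduced to a single in-memory key, and quantum-resistant protection would require different primitives.

    # Minimal sketch: protecting data at rest with authenticated symmetric
    # encryption from the "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    # In production, keys belong in a hardware security module or a managed
    # key vault, never stored alongside the data they protect.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    plaintext = b"record_id,status\n1001,approved\n"  # stand-in record data

    token = cipher.encrypt(plaintext)  # ciphertext is safe to store on disk
    with open("records.enc", "wb") as f:
        f.write(token)

    # Decryption doubles as an integrity check: ciphertext that has been
    # tampered with raises cryptography.fernet.InvalidToken.
    assert cipher.decrypt(token) == plaintext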

Although AI is fueling more sophisticated threats, such as social engineering attacks that employ deepfakes and other synthetic media, the good news is that AI can also strengthen defenses: behavioral analytics and anomaly detection play a growing role in countering these risks. At the same time, basic cyber hygiene practices, such as strong access controls, multi-factor authentication, and regular audits, remain essential to public sector cybersecurity.
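On the defensive side, a common building block is anomaly detection over activity telemetry. The sketch below trains scikit-learn's IsolationForest on synthetic "normal" login behavior and flags an outlier; the features and numbers are illustrative assumptions, not a reference detection model.

    # Minimal sketch: behavioral anomaly detection with scikit-learn.
    # Features per event (illustrative): hour of day, MB transferred,
    # failed login attempts. Real systems would use far richer signals.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = np.column_stack([
        rng.normal(13, 2, 500),   # activity clustered around business hours
        rng.normal(5, 2, 500),    # modest data transfers
        rng.poisson(0.2, 500),    # rare failed logins
    ])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A 3 a.m. session moving 500 MB after six failed logins should stand out.
    suspicious = np.array([[3.0, 500.0, 6.0]])
    print(detector.predict(suspicious))  # -1 marks the event as anomalous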

Securing operational performance through smart planning

Beyond transparency and cyber protection, securing public sector AI also means maintaining operational integrity and performance while managing costs. Sophisticated AI models require substantial resources to build and run, including energy-intensive computing, large datasets, and specialized skills. These demands can strain tight government budgets, but smart planning can help keep costs in check.

For example, cross-agency collaboration platforms for fraud detection and other common challenges can prevent duplication and promote more efficient use of resources on shared problems that affect many parts of government. Another approach is to use retrieval-augmented generation (RAG), data compression algorithms, and other advanced techniques that enable the use of smaller models while maintaining high accuracy. This reduces dependence on resource-heavy systems and supports more precise, mission-specific applications that fit within budget and policy constraints.
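A hedged sketch of the RAG pattern follows: documents are indexed with TF-IDF (standing in for a production embedding or vector store), the most relevant passage is retrieved, and only that context plus the question is handed to a model. The generate argument is a hypothetical placeholder for whichever small-model endpoint an agency actually uses, and the documents are invented stand-ins.

    # Minimal sketch of retrieval-augmented generation (RAG).
    # TF-IDF retrieval stands in for a production embedding/vector store.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [  # stand-in knowledge base for a hypothetical fraud program
        "Flagged benefit claims are routed to a human reviewer for a decision.",
        "Duplicate applications are cross-checked with partner agencies.",
        "Records shared between agencies must be encrypted in transit.",
    ]

    vectorizer = TfidfVectorizer().fit(documents)
    doc_vectors = vectorizer.transform(documents)

    def retrieve(question, k=1):
        """Return the k documents most similar to the question."""
        scores = cosine_similarity(
            vectorizer.transform([question]), doc_vectors)[0]
        return [documents[i] for i in scores.argsort()[::-1][:k]]

    def answer(question, generate):
        """`generate` is a placeholder for a small model's completion call."""
        context = "\n".join(retrieve(question))
        prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
        return generate(prompt)

Because the model sees only the retrieved passage rather than an entire corpus, a much smaller model can produce grounded answers, which is where the resource savings described above come from.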

(Harsamado/Shutterstock)

At the infrastructure level, agencies should consider cloud platforms, which offer an alternative to expensive on-premises systems through scalable compute and storage, stronger security features, and streamlined administration. Completing the picture for secure and cost-effective public sector AI is smart workforce planning. That includes automating tasks to free employees for more strategic responsibilities, and reducing dependence on external consultants by building internal skills through targeted training programs that strengthen in-house projects.

As the public sector continues to grow in both AI scale and impact, the choices made today will shape the security, trustworthiness, and effectiveness of these systems for years to come. Agencies must understand how AI fits into their unique operating context and threat landscape, and work proactively across teams to ensure alignment with both strategic goals and security needs. Ultimately, securing AI in these environments requires a proactive, end-to-end approach that embeds security, privacy, fairness, and performance throughout the AI lifecycle.

About the author

Bernie Legate has spent more than 20 years in the high-tech industry, 15 of them in communications and networking. Bernie joined Intel in November 2016 and now serves as Director of IoT Sales, Artificial Intelligence, at Intel. He played a key role as a member of the AI "core team" for Intel's AI initiative, helping define Intel's edge AI sales strategy, AI organizational support structures, and internal processes, including input into product development. Bernie holds a BS in electrical engineering from the University of Michigan, an MBA (Management/Finance) from Babson College, a certificate in behavioral economics from Harvard Business School, and a Master of Divinity (MDiv) from Gordon-Conwell Theological Seminary. In addition, Bernie is an active participant in Intel's commitment to diversity and inclusion, helping recruit and retain underrepresented minority college graduates. Bernie is married to his wife, Michelle, and has three children. In his spare time, he enjoys reading, exercise, entrepreneurship, and community service.
