AI technology has evolved remarkably in recent years, presenting the world with groundbreaking innovations. However, this progress must be accompanied by public and government oversight. Establishing regulatory frameworks becomes paramount in striking a delicate balance: setting necessary boundaries without stifling developers' creativity.

US regulation of AI technology - basic information

At the moment, the US government has only just started considering such regulations. So far, the White House has recommended that federal agencies oversee everything related to the use of AI technology for government purposes. These agencies will be in charge of:

  • undertaking risk assessments
  • implementing concrete safeguards
  • publishing detailed public disclosures whenever the US government uses AI technology
  • designating chief AI officers

Unlike the European Union, the United States decided to skip preparing a single official document and instead focus on assigning specific tasks to federal agencies.

Additionally, the US government plans to hire about 100 AI professionals to promote the safe use of AI.

Key points for US regulation preparation

In January 2023, NIST, an agency of the US Department of Commerce, published its AI Risk Management Framework (AI RMF 1.0), emphasizing that the document is voluntary and non-binding. Even so, it can be a useful source of guidance on designing, developing, and using AI products to ethical and awareness standards. A similar document has also been published under Section 5301.

As the institutions behind these documents already cooperate with the US government, there is a high probability that the final US AI regulation will incorporate statements from them as well.

What should be included in these documents?

Based on the risk assessments, case studies, and practical advice on AI technology already published in the US, the regulation is likely to cover:

  1. Safety requirements: real-time monitoring, backstops, or other interventions in the AI system to prevent endangerment of human life, health, or property
  2. Security assessment: protocols to avoid, protect against, or respond to attacks against the AI system
  3. Tutorials/case studies: understanding and properly using the mechanisms of an AI system
  4. Privacy: safeguarding human autonomy by protecting anonymity, confidentiality, and control
  5. Open-source data: accessible information about the AI system
  6. Validation system: demonstrating, through ongoing testing or monitoring, that the AI system performs as intended (a brief illustration follows this list)
  7. Ethical statements: developing AI technology in a way that remains compatible with fundamental legal and data protection principles
  8. A to-do list for start-up beginners, which may include:
    • building sufficient knowledge about the AI system
    • learning how to implement testing, evaluation, verification, and validation processes to inform management decisions
    • developing an organizational culture that incorporates AI risk management
    • conducting risk assessments

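To make the validation point (6) above a little more concrete, here is a minimal, purely illustrative Python sketch of what ongoing testing with a simple backstop could look like. Every name in it, from validate_model to the accuracy threshold, is a hypothetical example rather than anything taken from published US guidance.

```python
# Hypothetical sketch: ongoing validation of an AI system with a simple "backstop".
# The model is periodically re-scored on a labelled validation set; if accuracy
# drops below a threshold, the system is flagged for human review.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class ValidationReport:
    accuracy: float
    passed: bool


def validate_model(
    predict: Callable[[Sequence[float]], int],
    validation_set: Sequence[tuple[Sequence[float], int]],
    accuracy_threshold: float = 0.9,
) -> ValidationReport:
    """Re-run the model on labelled data and check it still performs as intended."""
    correct = sum(1 for features, label in validation_set if predict(features) == label)
    accuracy = correct / len(validation_set)
    return ValidationReport(accuracy=accuracy, passed=accuracy >= accuracy_threshold)


def backstop(report: ValidationReport) -> None:
    """Minimal backstop: escalate to human review instead of letting the system keep running."""
    if report.passed:
        print(f"OK: accuracy {report.accuracy:.2%}, system performs as intended.")
    else:
        print(f"ALERT: accuracy {report.accuracy:.2%} is below threshold -- escalate to human review.")


if __name__ == "__main__":
    # Dummy stand-ins for a real AI system and its validation data.
    dummy_model = lambda features: int(sum(features) > 1.0)
    dummy_validation_set = [([0.2, 0.3], 0), ([0.9, 0.8], 1), ([0.4, 0.1], 0), ([0.7, 0.9], 1)]

    backstop(validate_model(dummy_model, dummy_validation_set))
```

In a real deployment, a failing report would feed into the risk assessments and public disclosures listed above rather than just printing an alert.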
As the US is undoubtedly a leader in AI technology development, such regulation should be introduced as soon as possible. It would help contain the growing list of AI-related risks in the years ahead.

Author

Julia Mykhailiuk

Marketing Specialist