White House, Big Tech Ink Commitments to Secure AI
With Big Tech companies pledging voluntary safeguards, industry watchers expect that smaller AI purveyors will follow in their wake, making AI safer for all.
Seven leading tech companies — Google, Microsoft, Meta, Amazon, OpenAI, Anthropic, and Inflection — are meeting at the White House on July 21 to announce their voluntary commitments on sharing, testing, and developing safe and secure artificial intelligence (AI) technology.
The commitments center on information sharing and security testing, as well as transparency with the government and the public about the information the companies compile. Examples include pledges to protect privacy, to prevent bias and discrimination, and to watermark AI-generated content so that users know when material was created by AI.
And next month, Google and OpenAI will have their AI systems tested at the DEF CON hacking convention.
"US companies lead the world in innovation, and they have a responsibility to do that and continue to do that, but they have an equal responsibility to ensure that their products are safe, secure, and trustworthy," said Jeff Zients, White House chief of staff.
This turn in the global artificial intelligence conversation comes after an open letter from more than 1,000 top tech leaders made headlines by voicing their distress over the potential danger to humanity that AI poses.
The Biden Administration says it is working with Congress to create AI legislation that would provide safeguards and regulations for this kind of technology, and it is preparing executive actions that are soon to be announced. The administration is also holding listening sessions with union and civil rights leaders on the effects of AI and the way it influences their fields and their work — a key point of contention in the ongoing SAG-AFTRA and Writers Guild strikes, for instance.