Dresner Group Blog

Since 2002, our technology blog has featured IT tips and best practices for businesses in Columbia, Baltimore, Bel Air, and throughout Maryland.

A Consortium of AI Companies Has Committed to Risk Reduction


Back in July, the White House secured commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to help manage the risks that artificial intelligence potentially poses. More recently, eight more companies—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability—also pledged to maintain “the development of safe, secure, and trustworthy AI,” as a White House brief reported.

Let’s explore why this is so important, especially as AI continues to develop.

The Plan: AI-Generated Content Will Be Watermarked

As beneficial as artificial intelligence has proven to be, it has also become an effective tool for cybercriminals and other threat actors. From deepfaked images to replicated voices used to scam people out of thousands of dollars, there are countless ways that legitimate AI tools can be weaponized.

This is why the Biden White House is pushing these companies to develop the technology needed to watermark AI-generated content so that the platform used to create it can be identified. In theory, these watermarks would make it possible to prove whether an AI platform was involved in creating a given piece of content, making potential threats easier to spot and encouraging the platforms themselves to detect misuse more effectively.
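To give a rough sense of how a provenance mark could be made verifiable, here is a deliberately simplified sketch using Python's standard-library HMAC support. It is not any company's actual watermarking scheme (real proposals embed robust statistical signals in the content itself); the provider name, key, and tag format below are all hypothetical, purely for illustration of the "prove which platform made this" idea.

```python
import hmac
import hashlib

# Hypothetical provider-held secret key; a real deployment would use
# managed keys and a public verification service, not a hardcoded value.
SECRET_KEY = b"exampleai-provider-secret"

def watermark(content: str) -> str:
    """Append a provenance tag that ties the content to the platform."""
    tag = hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()
    return f"{content}\n[provenance: exampleai:{tag}]"

def verify(tagged: str) -> bool:
    """Return True only if the tag matches the content exactly."""
    content, sep, footer = tagged.rpartition("\n[provenance: exampleai:")
    if not sep or not footer.endswith("]"):
        return False  # no recognizable tag present
    claimed = footer[:-1]
    expected = hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(claimed, expected)
```

Because the tag is keyed to the exact content, any edit to the text invalidates it, which is what lets a verifier tell "this platform produced exactly this" from "someone altered or forged it."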

In addition to the watermark, the technology firms have agreed to other safeguards:

  • Investments will be made in cybersecurity to protect the essential data that powers AI models
  • Independent experts will test AI models before they're released to ensure that the major risks associated with AI are accounted for
  • Research will be conducted into the risks AI poses to society at large, such as bias and inappropriate use, with any identified issues flagged
  • Third parties will be better able to discover vulnerabilities and report them so they can be resolved
  • AI risk management data will be shared with other companies, with academia, and with society at large
  • The firms will disclose their security risks and the risks their products pose to society, including bias
  • The firms have also committed to creating AI that tackles some of society's largest, most pressing issues

Granted, these standards and practices aren’t enforceable by the government, but they serve as an invaluable first step towards more secure artificial intelligence.

We Can Help Secure Your Business Against Today’s Threats

We’ve long been committed to fulfilling business IT needs, particularly in regard to cybersecurity. Give us a call at (410) 531-6727 to find out what we can do for you and your operations.


Copyright Dresner Group. All Rights Reserved.