Implementing AI Policies

Implementing AI Bill of Rights

Implementing Artificial Intelligence (AI) policies is essential to protect businesses’ intellectual property, safeguard consumers’ privacy, and prevent malicious use against demographic segments as well as cybersecurity threats.


What Are the Policy Measures the Government Is Implementing?

There are two main measures to ensure ethical principles apply to Artificial Intelligence (AI): 

  • AI Risk Management Framework.  
  • AI Bill of Rights. 

The AI Risk Management Framework is a set of guidelines for organizations to follow when developing, deploying, and managing AI solutions. With so much talk about AI becoming sentient, titling a bill “The AI Bill of Rights” doesn’t help. The Blueprint for an AI Bill of Rights aims to define the rights and responsibilities of humans and machines to ensure ethical use.
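
As one illustration of how an organization might put the framework into practice, here is a minimal Python sketch of an internal risk register organized around the framework’s four core functions (Govern, Map, Measure, Manage). The class names, fields, and example entry are hypothetical and meant only to show the idea, not a prescribed implementation.

```python
from dataclasses import dataclass, field
from enum import Enum


class RmfFunction(Enum):
    """The four core functions of the NIST AI Risk Management Framework."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class AiRiskEntry:
    """One tracked risk for an AI system (hypothetical internal schema)."""
    system_name: str
    description: str
    function: RmfFunction          # which RMF function the activity falls under
    owner: str                     # person or team accountable for the risk
    mitigations: list[str] = field(default_factory=list)


# Example: logging a bias risk identified while mapping an AI hiring tool.
register = [
    AiRiskEntry(
        system_name="resume-screening-model",
        description="Model may rank candidates differently across demographic groups.",
        function=RmfFunction.MAP,
        owner="data-governance-team",
        mitigations=["quarterly disparate-impact audit", "human review of rejections"],
    )
]

for entry in register:
    print(f"[{entry.function.value}] {entry.system_name}: {entry.description}")
```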

What Are Some of the Concerns With AI?

Data collection using AI for discrimination or to enforce bias threatens people’s opportunities, violates their privacy, and tracks their activities without permission. Will these bills make a difference when so many data brokers buy and sell information without people’s prior knowledge or consent, or wrap that consent in documents that are difficult for the average person to understand?

Anyone in business understands how important data is for marketing, making it possible to build a segmented audience for products and services. As a former government employee, one of my biggest tasks was to find data to market government funds. The government collaborates with institutions, nonprofits, and other agencies when gathering data, meaning it uses the data of its partners. Should a new consent be requested when a new party obtains your information? Consent regarding data usage is often unclear and vague. Data brokers pay a high market value for your data and sell it to companies, which encourages manipulative language and all-encompassing consent requests.

A White House Meeting Will Bring Together the Top AI CEOs

The White House has convened a meeting with CEOs of top tech companies, including Sam Altman, CEO of OpenAI; Dario Amodei, CEO of Anthropic; Satya Nadella, chairman and CEO of Microsoft; and Sundar Pichai, CEO of Alphabet. The only people missing are those the technology actually affects: aside from White House officials, there are no other attendees.

Videos are circulating on the internet of people asking AI systems questions and receiving answers that show bias on religion and policy, or that selectively omit information when the question is rephrased. Other videos show AI being used to mimic the voices of family members and extract private and sensitive information. With TikTok’s Duet feature, which lets a creator reuse another creator’s video side by side to dance or post reactions, people have been using it to report the news or share their experiences. Videos posted originally on TikTok get recycled on Instagram, YouTube Shorts, and Facebook, spreading concerns faster than resolutions to the problem.

Companies’ Proprietary Information

Another issue is companies’ proprietary information. Are companies creating protocols for using AI? Are employees aware of what is considered proprietary information? From customer service to top executives, what is your policy? How are you protecting your company from a communication crisis, legal issue, or cybersecurity incident? A minimal example of one such protocol is sketched below.
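
As one illustration of what an internal protocol could look like, here is a minimal Python sketch of a pre-submission check that flags prompts containing proprietary markers before employees send them to an external AI service. The patterns, function name, and messages are hypothetical examples under assumed company rules, not a definitive standard.

```python
import re

# Hypothetical markers a company might treat as proprietary; a real policy
# would maintain this list centrally and keep it up to date.
PROPRIETARY_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\binternal use only\b", re.IGNORECASE),
    re.compile(r"\bcustomer[_\s-]?id\s*[:=]\s*\d+", re.IGNORECASE),
]


def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain proprietary information."""
    return not any(pattern.search(prompt) for pattern in PROPRIETARY_PATTERNS)


# Example: an employee drafts a prompt for an external AI assistant.
draft = "Summarize this CONFIDENTIAL pricing sheet for customer_id: 48213."
if is_safe_to_send(draft):
    print("Prompt cleared for the external AI service.")
else:
    print("Blocked: remove proprietary details or use an approved internal tool.")
```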

Creating a Positive Outcome

There are more questions than answers because AI is still evolving into more advanced stages of learning. There are calls to temporarily pause further AI releases until law and business catch up with privacy and security concerns.