This Week in AI: Can we (and could we ever) trust OpenAI? – TechCrunch

Published June 2, 2024


Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

By the way, TechCrunch plans to launch an AI newsletter on June 5. Stay tuned. In the meantime, we're upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly, so be on the lookout for more editions.

This week in AI, OpenAI launched discounted plans for nonprofit and education customers and drew back the curtains on its most recent efforts to stop bad actors from abusing its AI tools. There's not much to criticize there, at least not in this writer's opinion. But I will say that the deluge of announcements seemed timed to counter the company's bad press as of late.

Let's start with Scarlett Johansson. OpenAI removed one of the voices used by its AI-powered chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson's. Johansson later released a statement saying that she hired legal counsel to inquire about the voice and get exact details about how it was developed, and that she'd refused repeated entreaties from OpenAI to license her voice for ChatGPT.

Now, a piece in The Washington Post implies that OpenAI didn't in fact seek to clone Johansson's voice and that any similarities were accidental. But why, then, did OpenAI CEO Sam Altman reach out to Johansson and urge her to reconsider two days before a splashy demo that featured the soundalike voice? It's a tad suspect.

Then there are OpenAI's trust and safety issues.

As we reported earlier in the month, OpenAI's since-dissolved Superalignment team, responsible for developing ways to govern and steer superintelligent AI systems, was promised 20% of the company's compute resources but only ever (and rarely) received a fraction of this. That (among other reasons) led to the resignation of the team's two co-leads, Jan Leike and Ilya Sutskever, formerly OpenAI's chief scientist.

Nearly a dozen safety experts have left OpenAI in the past year; several, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company's projects and operations. But it staffed the committee with company insiders, including Altman, rather than outside observers. This comes as OpenAI reportedly considers ditching its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it harder to trust OpenAI, a company whose power and influence grow daily (see: its deals with news publishers). Few corporations, if any, are worthy of trust. But OpenAI's market-disrupting technologies make the violations all the more troubling.

It doesn't help matters that Altman himself isn't exactly a beacon of truthfulness.

When news broke of OpenAI's aggressive tactics toward former employees, tactics that entailed threatening employees with the loss of their vested equity (or the prevention of equity sales) if they didn't sign restrictive nondisclosure agreements, Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman's signature is on the incorporation documents that enacted the policies.

And if former OpenAI board member Helen Toner is to be believed (she is one of the ex-board members who attempted to remove Altman from his post late last year), Altman has withheld information, misrepresented things that were happening at OpenAI and, in some cases, outright lied to the board. Toner says that the board learned of the release of ChatGPT through Twitter, not from Altman; that Altman gave wrong information about OpenAI's formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast a critical light on OpenAI, tried to manipulate board members to push Toner off the board.

None of it bodes well.

Here are some other AI stories of note from the past few days:

