Amazon, Google, Meta, Microsoft and other firms agree to AI safeguards

Amazon, Google, Meta, Microsoft and other companies leading the development of artificial intelligence (AI) technology have agreed to meet a set of safeguards brokered by US President Joe Biden’s administration.

The White House said it has secured voluntary commitments from seven American companies meant to ensure their AI products are safe before they release them.

Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they do not detail who will audit the technology or hold the companies accountable.

A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has elicited public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.

The four tech giants, along with ChatGPT-maker OpenAI and start-ups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images from AI-generated ones known as deepfakes.

They will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.

The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get the US Congress to pass laws regulating the technology.

Some advocates for AI regulation said Mr Biden’s move is a start, but more needs to be done to hold the companies and their products accountable.

“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” said Amba Kak, executive director of the AI Now Institute.

“We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”

James Steyer, founder and CEO of the non-profit Common Sense Media, said: “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

US Senate majority leader Chuck Schumer has said he will introduce legislation to regulate AI.

He has held a number of briefings with government officials to educate senators about an issue that has attracted bipartisan interest.

A number of technology executives have called for regulation, and several went to the White House in May to speak with Mr Biden, vice president Kamala Harris and other officials.

But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.

The software trade group BSA, which includes Microsoft as a member, said it welcomed the Biden administration’s efforts to set rules for high-risk AI systems.

“Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promotes its benefits,” the group said in a statement.

A number of countries have been looking at ways to regulate AI, including European Union legislators who have been negotiating sweeping AI rules for the 27-nation bloc.

UN Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards, and appointed a board that will report back on options for global AI governance by the end of the year.

The United Nations chief also said he welcomed calls from some countries for the creation of a new UN body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

The White House said it has already consulted on the voluntary commitments with a number of countries.

Microsoft president Brad Smith said in a blog post on Friday that his company is making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models”.


Discover more from London Glossy Post
