Impressions from Risk AI - London: Humans have judgment; today’s LLMs are ‘distortions’… 💡 Big companies “can ignore the stick”. Bringing ‘responsibility’ back to where it belongs.
Responsible AI = AI, and AI doesn’t have ‘intent’. (Note: but it does carry the intent of its masters, via RLHF, system prompts, protocols, and moderation.) Licensing is the way to go, preferably at scale.
https://www.riskai.global/london-agenda
How to Build Trust in AI - Part 1 of 2
How to Build Trust in AI - Part 2 of 2
“Don’t be scared, but don’t be stupid” 😊 Understanding the AI Regulatory Landscape.
IBM watsonx on the challenges their clients face - Managing the Risks in Scaling AI
Licensing is the way to go, preferably at scale. Listening to Ed Zsyszkowski (Personal Digital Spaces) - Navigating Legal Minefields: AI Risks and Compliance in the Modern Age
Bringing ‘responsibility’ back to where it belongs (not all of it belongs with the AI user). Listening to Sarah Clarke - Building Trust with Stakeholders and Society
Responsible AI = AI, and AI by itself doesn’t have ‘intent’. (Note: but it does carry the intent of its masters, via RLHF, system prompts, protocols, and moderation.) Comments from Pauline Norstrom (Anekanta) and Daniel Hulme (WPP/Satalia) - Responsible AI: Unlocking Competitive Advantages
Yeah, we need systems to check on AI, and big companies “can ignore the stick”; to them it’s just the cost of doing business… Listening to Lori Fena (Personal Digital Spaces) - Security and Privacy Risks for Generative AI
Humans have judgment… today’s LLMs are ‘distortions’… 💡 Human Agency and Oversight: Striking the Balance in AI Decision-Making