
Karine Perset helps governments understand AI


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Karine Perset works for the Organization for Economic Co-operation and Development (OECD), where she runs its AI Unit and oversees the OECD.AI Policy Observatory and the OECD.AI Networks of Experts within the Division for Digital Economy Policy.

Perset specializes in AI and public policy. She previously worked as an advisor to the Internet Corporation for Assigned Names and Numbers (ICANN)’s Governmental Advisory Committee and as Counsellor to the OECD’s Science, Technology, and Industry Director.

What work are you most proud of (in the AI field)?

I am extremely proud of the work we do at OECD.AI. Over the last few years, the demand for policy resources and guidance on trustworthy AI has really increased, from both OECD member countries and AI ecosystem actors.

When we started this work around 2016, there were only a handful of countries that had national AI initiatives. Fast forward to today, and the OECD.AI Policy Observatory – a one-stop shop for AI data and trends – documents over 1,000 AI initiatives across nearly 70 jurisdictions. 

Globally, all governments are facing the same questions on AI governance. We are all keenly aware of the need to strike a balance between enabling the innovation and opportunities AI has to offer and mitigating the risks related to misuse of the technology. I think the rise of generative AI in late 2022 has really put a spotlight on this.

The ten OECD AI Principles from 2019 were quite prescient, in the sense that they foresaw many key issues that remain salient today — five years later, with AI technology having advanced considerably. For governments elaborating their AI policies, the Principles serve as a guiding compass towards trustworthy AI that benefits people and the planet. They place people at the center of AI development and deployment, which I think is something we can’t afford to lose sight of, no matter how advanced, impressive, and exciting AI capabilities become.

To track progress on implementing the OECD AI Principles, we developed the OECD.AI Policy Observatory, a central hub for real-time or quasi-real-time AI data, analysis, and reports, which have become authoritative resources for many policymakers globally. But the OECD can’t do it alone, and multi-stakeholder collaboration has always been our approach. We created the OECD.AI Network of Experts – a network of more than 350 of the leading AI experts globally – to help tap their collective intelligence to inform policy analysis. The network is organized into six thematic expert groups, examining issues including AI risk and accountability, AI incidents, and the future of AI.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

When we look at the data, unfortunately, we still see a gender gap regarding who has the skills and resources to effectively leverage AI. In many countries, women still have less access to training, skills, and infrastructure for digital technologies. They are still underrepresented in AI R&D, while stereotypes and biases embedded in algorithms can prompt gender discrimination and limit women’s economic potential. In OECD countries, more than twice as many young men as women aged 16-24 can program, an essential skill for AI development. We clearly have more work to do to attract women to the AI field.

However, while the private sector AI technology world is highly male-dominated, I’d say that the AI policy world is a bit more balanced. For instance, my team at the OECD is close to gender parity. Many of the AI experts we work with are truly inspiring women, such as Elham Tabassi from the U.S. National Institute of Standards and Technology (NIST); Francesca Rossi at IBM; Rebecca Finlay and Stephanie Ifayemi from the Partnership on AI; Lucilla Sioli, Irina Orssich, Tatjana Evas and Emilia Gomez from the European Commission; Clara Neppel from the IEEE; Nozha Boujemaa from Decathlon; Dunja Mladenic at the Slovenian JSI AI lab; and of course my own amazing boss and mentor Audrey Plonk, just to name a few, and there are so many more.

We need women and diverse groups represented in the technology sector, academia, and civil society to bring rich and diverse perspectives. Unfortunately, in 2022, only one in four researchers publishing on AI worldwide was a woman. While the number of publications co-authored by at least one woman is increasing, women contribute to only about half as many AI publications as men, and the gap widens as the number of publications increases. All this to say, we need more representation from women and diverse groups in these spaces.

So to answer your question, how do I navigate the challenges of the male-dominated technology industry? I show up. I am very grateful that my position allows me to meet with experts, government officials, and corporate representatives and speak in international forums on AI governance. It allows me to engage in discussions, share my point of view, and challenge assumptions. And, of course, I let the data speak for itself.

What advice would you give to women seeking to enter the AI field?

Speaking from my experience in the AI policy world, I would say not to be afraid to speak up and share your perspective. We need more diverse voices around the table when we develop AI policies and AI models. We all have our unique stories and something different to bring to the conversation. 

To develop safer, more inclusive, and trustworthy AI, we must look at AI models and data input from different angles, asking ourselves: what are we missing? If you don’t speak up, your team might miss out on a really important insight. Chances are that, because you have a different perspective, you’ll see things that others do not, and as a global community, we can be greater than the sum of our parts if everyone contributes.

I would also emphasize that there are many roles and paths in the AI field. A degree in computer science is not a prerequisite to work in AI. We already see jurists, economists, social scientists, and many more profiles bringing their perspectives to the table. As we move forward, true innovation will increasingly come from blending domain knowledge with AI literacy and technical competencies to come up with effective AI applications in specific domains. We already see universities offering AI courses beyond computer science departments. I truly believe interdisciplinarity will be key for AI careers. So, I would encourage women from all fields to consider what they can do with AI, and not to shy away for fear of being less competent than men.

What are some of the most pressing issues facing AI as it evolves?

I think the most pressing issues facing AI can be divided into three buckets.

First, I think we need to bridge the gap between policymakers and technologists. In late 2022, generative AI advances took many by surprise, despite some researchers anticipating such developments. Understandably, each discipline looks at AI issues from a unique angle. But AI issues are complex; collaboration and interdisciplinarity between policymakers, AI developers, and researchers are key to understanding them holistically, keeping pace with AI progress, and closing knowledge gaps.

Second, the international interoperability of AI rules is mission-critical to AI governance. Many large economies have started regulating AI. For instance, the European Union just agreed on its AI Act, the U.S. has adopted an executive order for the safe, secure, and trustworthy development and use of AI, and Brazil and Canada have introduced bills to regulate the development and deployment of AI. What’s challenging here is to strike the right balance between protecting citizens and enabling business innovations. AI knows no borders, and many of these economies have different approaches to regulation and protection; it will be crucial to enable interoperability between jurisdictions.

Third, there is the question of tracking AI incidents, which have increased rapidly with the rise of generative AI. Failure to address the risks associated with AI incidents could exacerbate the lack of trust in our societies. Importantly, data about past incidents can help us prevent similar incidents from happening in the future. Last year, we launched the AI Incidents Monitor. This tool uses global news sources to track AI incidents around the world and better understand the harms they cause. It provides real-time evidence to support policy and regulatory decisions about AI, especially for real risks such as bias, discrimination, and social disruption, and about the types of AI systems that cause them.

What are some issues AI users should be aware of?

Something that policymakers globally are grappling with is how to protect citizens from AI-generated mis- and disinformation – such as synthetic media like deepfakes. Of course, mis- and disinformation have existed for some time, but what is different here is the scale, quality, and low cost of AI-generated synthetic outputs.

Governments are well aware of the issue and are looking at ways to help citizens identify AI-generated content and assess the veracity of the information they are consuming, but this is still an emerging field, and there is still no consensus on how to tackle such issues. 

Our AI Incidents Monitor can help track global trends and keep people informed about major cases of deepfakes and disinformation. But in the end, with the increasing volume of AI-generated content, people need to develop information literacy, sharpening their skills, reflexes, and ability to check reputable sources to assess information accuracy. 

What is the best way to responsibly build AI?

Many of us in the AI policy community are diligently working to find ways to build AI responsibly, acknowledging that determining the best approach often hinges on the specific context in which an AI system is deployed. Nonetheless, building AI responsibly necessitates careful consideration of ethical, social, and safety implications throughout the AI system lifecycle.

One of the OECD AI Principles refers to the accountability that AI actors bear for the proper functioning of the AI systems they develop and use. This means that AI actors must take measures to ensure that the AI systems they build are trustworthy. By this, I mean that they should benefit people and the planet, respect human rights, be fair, transparent, and explainable, and meet appropriate levels of robustness, security, and safety. To achieve this, actors must govern and manage risks throughout their AI systems’ lifecycle – from planning, design, and data collection and processing to model building, validation and deployment, operation, and monitoring.

Last year, we published a report on “Advancing Accountability in AI,” which provides an overview of integrating risk management frameworks and the AI system lifecycle to develop trustworthy AI. The report explores processes and technical attributes that can facilitate the implementation of values-based principles for trustworthy AI and identifies tools and mechanisms to define, assess, treat, and govern risks at each stage of the AI system lifecycle.

How can investors better push for responsible AI?

By advocating for responsible business conduct in the companies they invest in. Investors play a crucial role in shaping the development and deployment of AI technologies, and they should not underestimate their power to influence internal practices with the financial support they provide.

For example, the private sector can support the development and adoption of responsible guidelines and standards for AI through initiatives such as the OECD’s Responsible Business Conduct (RBC) Guidelines, which we are currently tailoring specifically for AI. These guidelines will notably facilitate international compliance for AI companies selling their products and services across borders and enable transparency throughout the AI value chain – from suppliers to deployers to end-users. The RBC guidelines for AI will also provide a non-judicial grievance mechanism – in the form of national contact points tasked by national governments with mediating disputes – allowing users and affected stakeholders to seek remedies for AI-related harms.

By guiding companies to implement standards and guidelines for AI – like the RBC guidelines – private sector partners can play a vital role in promoting trustworthy AI development and shaping the future of AI technologies in a way that benefits society as a whole.




by Team SNFYI

Facebook is testing a new feature that invites some users—mainly in the US and Canada—to let Meta AI access parts of their phone’s camera roll. This opt-in “cloud processing” option uploads recent photos and videos to Meta’s servers so the AI can offer personalized suggestions, such as creating collages, highlight reels, or themed memories like birthdays and graduations. It can also generate AI-based edits or restyles of those images. Meta says this is optional and assures users that the uploaded media won’t be used for advertising. However, to enable this, people must agree to let Meta analyze faces, objects, and metadata like time and location. Currently, the company claims these photos won’t be used to train its AI models—but they haven’t completely ruled that out for the future. Typically, only the last 30 days of photos get uploaded, though special or older images might stay on Meta’s servers longer for specific features. Users have the option to disable the feature anytime, which prompts Meta to delete the stored media after 30 days. Privacy experts are concerned that this expands Meta’s reach into private, unpublished images and could eventually feed future AI training. Unlike Google Photos, which explicitly states that user photos won’t train its AI, Meta hasn’t made that commitment yet. For now, this is still a test run for a limited group of people, but it highlights the tension between AI-powered personalization and the need to protect personal data.

by Team SNFYI

News Update by Mridul | March 14, 2024

Meesho, an online shopping platform based in Bengaluru, has announced its largest Employee Stock Ownership Plan (ESOP) buyback pool to date, totaling Rs 200 crore. The buyback extends to both current and former employees, providing wealth-creation opportunities for approximately 1,700 individuals. Ashish Kumar Singh, Meesho’s Chief Human Resources Officer, emphasized the company’s commitment to rewarding its teams, stating, “At Meesho, our employees are the driving force behind our success.” Singh further highlighted the company’s dedication to providing opportunities for wealth creation despite prevailing macroeconomic conditions.

This marks the fourth wealth-generation opportunity at Meesho, with the size of the buyback program increasing each year. In previous years, Meesho conducted buybacks worth over Rs 8.2 crore in February 2020, Rs 41.4 crore in November 2020, and Rs 45.5 crore in October 2021.

Meesho’s profitability journey began in July 2023, making it the first horizontal Indian e-commerce company to achieve profitability. Despite turning profitable, Meesho continues to maintain positive cash flow and focuses on enhancing efficiencies across various cost items. The company’s revenue from operations for FY 2022-23 witnessed a remarkable growth of 77% over the previous year, amounting to Rs 5,735 crore. This growth can be attributed to Meesho’s leadership position as the most downloaded shopping app in India in both 2022 and 2023, increased transaction frequency among existing customers, and a diversified category mix. Additionally, Meesho’s focus on improving monetization through value-added seller services contributed to its revenue growth.

Meesho also disclosed its audited performance for the first half of FY 2023-24, reporting consolidated revenues from operations of Rs 3,521 crore, marking a 37% year-over-year increase. The company achieved profitability in Q2 FY24, with a significant reduction in losses compared to the previous year. Furthermore, Meesho recorded impressive app download numbers, reaching 145 million downloads in India in 2023 and surpassing 500 million downloads in H1 FY 2023-24.

by Team SNFYI

You might’ve heard of Grok, X’s answer to OpenAI’s ChatGPT. It’s a chatbot, and, in that sense, behaves as you’d expect — answering questions about current events, pop culture and so on. But unlike other chatbots, Grok has “a bit of wit,” as X owner Elon Musk puts it, and “a rebellious streak.” Long story short, Grok is willing to speak to topics that are usually off limits to other chatbots, like polarizing political theories and conspiracies. And it’ll use less-than-polite language while doing so — for example, responding to the question “When is it appropriate to listen to Christmas music?” with “Whenever the hell you want.”

But Grok’s ostensible biggest selling point is its ability to access real-time X data — an ability no other chatbot has, thanks to X’s decision to gatekeep that data. Ask it “What’s happening in AI today?” and Grok will piece together a response from very recent headlines, while ChatGPT, by contrast, will provide only vague answers that reflect the limits of its training data (and the filters on its web access).

Earlier this week, Musk pledged that he would open source Grok, without revealing precisely what that meant. So you’re probably wondering: How does Grok work? What can it do? And how can I access it? You’ve come to the right place. We’ve put together this handy guide to help explain all things Grok. We’ll keep it up to date as Grok changes and evolves.

How does Grok work?

Grok is the invention of xAI, Elon Musk’s AI startup — a startup reportedly in the process of raising billions in venture capital. (Developing AI is expensive.) Underpinning Grok is a generative AI model called Grok-1, developed over the course of months on a cluster of “tens of thousands” of GPUs, according to an xAI blog post. To train it, xAI sourced data both from the web (dated up to Q3 2023) and from feedback provided by human assistants that xAI refers to as “AI tutors.” On popular benchmarks, xAI claims, Grok-1 is about as capable as Meta’s open source Llama 2 chatbot model and surpasses OpenAI’s GPT-3.5.

Human-guided feedback, or reinforcement learning from human feedback (RLHF), is the way most AI-powered chatbots are fine-tuned these days. RLHF involves training a generative model, then gathering additional information to train a “reward” model and fine-tuning the generative model with the reward model via reinforcement learning. RLHF is quite good at “teaching” models to follow instructions — but not perfect. Like other models, Grok is prone to hallucinating, sometimes offering misinformation and false timelines when asked about news. And these errors can be severe — like wrongly claiming that the Israel–Palestine conflict had reached a ceasefire when it hadn’t.

For questions that stretch beyond its knowledge base, Grok leverages “real-time access” to information on X (and from Tesla, according to Bloomberg). And, similar to ChatGPT, the model has internet browsing capabilities, enabling it to search the web for up-to-date information. Musk has promised improvements with the …
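The RLHF recipe mentioned in the guide — generate, score with a learned reward model, then reinforce what scores well — can be illustrated with a deliberately tiny sketch. Everything here is hypothetical (the canned responses, the hard-coded reward scores, the learning rate); real systems learn the reward model from human preference labels and update a neural network, not a lookup table.

```python
import random

random.seed(0)

# Step 1: a toy "generative model" - just a probability
# distribution over a few canned responses.
responses = ["curt answer", "helpful answer", "off-topic answer"]
policy = {r: 1.0 / len(responses) for r in responses}  # start uniform

# Step 2: a toy "reward model". In real RLHF these scores are
# learned from human preference comparisons; here they are fixed.
reward = {"curt answer": 0.2, "helpful answer": 1.0, "off-topic answer": 0.0}

# Step 3: reinforcement-style updates - sample a response, compare
# its reward to the current expected reward (the baseline), and
# shift probability mass toward above-baseline responses.
LEARNING_RATE = 0.1
for _ in range(200):
    sampled = random.choices(responses, weights=[policy[r] for r in responses])[0]
    baseline = sum(policy[r] * reward[r] for r in responses)
    advantage = reward[sampled] - baseline
    policy[sampled] *= 1.0 + LEARNING_RATE * advantage
    total = sum(policy.values())                      # renormalize so the
    policy = {r: p / total for r, p in policy.items()}  # weights stay a distribution

# After training, the policy concentrates on the highest-reward response.
print(max(policy, key=policy.get))
```

The baseline subtraction is the key design choice: responses are reinforced only relative to what the model already expects to earn, which is the same idea (in miniature) behind the policy-gradient methods used to fine-tune chatbots.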