
Mutale Nkonde’s nonprofit is working to make AI less biased


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Mutale Nkonde is the founding CEO of the nonprofit AI For the People (AFP), which seeks to increase the number of Black voices in tech. Before this, she helped introduce the Algorithmic Accountability Act and the Deep Fakes Accountability Act, in addition to the No Biometric Barriers to Housing Act, to the US House of Representatives. She is currently a Visiting Policy Fellow at the Oxford Internet Institute.

Briefly, how did you get your start in AI? What attracted you to the field?

I started to become curious about how social media worked after a friend of mine posted that Google Photos labeled two Black people as gorillas in 2015. I was involved with a lot of "Blacks in tech" circles, and we were outraged, but I did not begin to understand this was because of algorithmic bias until the publication of Weapons of Math Destruction in 2016. This inspired me to start applying for fellowships where I could study this further, and it ended with my role as a co-author of a report called Advancing Racial Literacy in Tech, which was published in 2019. This was noticed by folks at the MacArthur Foundation and kick-started the current leg of my career.

I was attracted to questions about racism and technology because they seemed under-researched and counterintuitive. I like to do things other people do not, so learning more and disseminating this information within Silicon Valley seemed like a lot of fun. Since Advancing Racial Literacy in Tech, I have started a nonprofit called AI for the People that focuses on advocating for policies and practices to reduce the expression of algorithmic bias.

What work are you most proud of (in the AI field)?

I am really proud of being the leading advocate of the Algorithmic Accountability Act, which was first introduced to the House of Representatives in 2019. It established AI for the People as a key thought leader around how to develop protocols to guide the design, deployment, and governance of AI systems that comply with local nondiscrimination laws. This has led to us being included in the Schumer AI Insights Channels as part of an advisory group for various federal agencies and some exciting upcoming work on the Hill.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I have actually had more issues with academic gatekeepers. Most of the men I work with in tech companies have been charged with developing systems for use on Black and other nonwhite populations, and so they have been very easy to work with, principally because I am acting as an external expert who can either validate or challenge existing practices.

What advice would you give to women seeking to enter the AI field?

Find a niche and then become one of the best people in the world at it. Two things have helped me build credibility. The first was that I was advocating for policies to reduce algorithmic bias while people in academia were just beginning to discuss the issue. This gave me a first-mover advantage in the "solutions space" and made AI for the People an authority on the Hill five years before the executive order. The second thing I would say is to look at your deficiencies and address them. AI for the People is four years old, and I have been gaining the academic credentials I need to ensure I am not pushed out of thought leader spaces. I cannot wait to graduate with a master's degree from Columbia in May, and I hope to continue researching in this field.

What are some of the most pressing issues facing AI as it evolves?

I am thinking heavily about the strategies that can be pursued to involve more Black people and people of color in the building, testing, and annotating of foundational models. These technologies are only as good as their training data, so how do we create inclusive datasets at a time when DEI is being attacked, Black venture funds are being sued for targeting Black and female founders, and Black academics are being publicly attacked? Who will do this work in the industry?

What are some issues AI users should be aware of?

I think we should be thinking about AI development as a geopolitical issue and about how the United States could become a leader in truly scalable AI by creating products that have high efficacy rates for people in every demographic group. China is the only other large AI producer, but it is building products for a largely homogeneous population, even though it has a large footprint in Africa. The American tech sector could dominate that market if aggressive investments were made in developing anti-bias technologies.

What is the best way to responsibly build AI?

There needs to be a multi-pronged approach, but one thing to consider would be pursuing research questions that center on people living on the margins of the margins. The easiest way to do this is by taking note of cultural trends and then considering how they impact technological development. For example, asking questions like: how do we design scalable biometric technologies in a society where more people are identifying as trans or nonbinary?

How can investors better push for responsible AI?

Investors should be looking at demographic trends and then asking themselves whether these companies will be able to sell to a population that is becoming increasingly Black and brown because of falling birth rates in European populations across the globe. This should prompt them to ask questions about algorithmic bias during the due diligence process, as this will increasingly become an issue for consumers.

There is so much work to be done on reskilling our workforce for a time when AI systems do low-stakes labor-saving tasks. How can we make sure that people living at the margins of our society are included in these programs? What information can they give us about how AI systems do and do not work for them, and how can we use these insights to make sure AI truly is for the people?




by Team SNFYI

Facebook is testing a new feature that invites some users (mainly in the US and Canada) to let Meta AI access parts of their phone's camera roll. This opt-in "cloud processing" option uploads recent photos and videos to Meta's servers so the AI can offer personalized suggestions, such as creating collages, highlight reels, or themed memories like birthdays and graduations. It can also generate AI-based edits or restyles of those images.

Meta says this is optional and assures users that the uploaded media won't be used for advertising. However, to enable this, people must agree to let Meta analyze faces, objects, and metadata like time and location. Currently, the company claims these photos won't be used to train its AI models, but it hasn't completely ruled that out for the future. Typically, only the last 30 days of photos get uploaded, though special or older images might stay on Meta's servers longer for specific features. Users can disable the feature anytime, which prompts Meta to delete the stored media after 30 days.

Privacy experts are concerned that this expands Meta's reach into private, unpublished images and could eventually feed future AI training. Unlike Google Photos, which explicitly states that user photos won't train its AI, Meta hasn't made that commitment yet. For now, this is still a test run for a limited group of people, but it highlights the tension between AI-powered personalization and the need to protect personal data.


News update by Mridul | March 14, 2024

Meesho, an online shopping platform based in Bengaluru, has announced its largest Employee Stock Ownership Plan (ESOP) buyback pool to date, totaling Rs 200 crore. This buyback initiative extends to both current and former employees, providing wealth creation opportunities for approximately 1,700 individuals.

Ashish Kumar Singh, Meesho's Chief Human Resources Officer, emphasized the company's commitment to rewarding its teams, stating, "At Meesho, our employees are the driving force behind our success." Singh further highlighted the company's dedication to providing opportunities for wealth creation despite prevailing macroeconomic conditions. This marks the fourth wealth generation opportunity at Meesho, with the size of the buyback program increasing each year. In previous years, Meesho conducted buybacks worth over Rs 8.2 crore in February 2020, Rs 41.4 crore in November 2020, and Rs 45.5 crore in October 2021.

Meesho's profitability journey began in July 2023, making it the first horizontal Indian e-commerce company to achieve profitability. Despite turning profitable, Meesho continues to maintain positive cash flow and focuses on enhancing efficiencies across various cost items. The company's revenue from operations for FY 2022-23 witnessed remarkable growth of 77% over the previous year, amounting to Rs 5,735 crore. This growth can be attributed to Meesho's leadership position as the most downloaded shopping app in India in both 2022 and 2023, increased transaction frequency among existing customers, and a diversified category mix. Additionally, Meesho's focus on improving monetization through value-added seller services contributed to its revenue growth.

Meesho also disclosed its audited performance for the first half of FY 2023-24, reporting consolidated revenues from operations of Rs 3,521 crore, marking a 37% year-over-year increase. The company achieved profitability in Q2 FY24, with a significant reduction in losses compared to the previous year. Furthermore, Meesho recorded impressive app download numbers, reaching 145 million downloads in India in 2023 and surpassing 500 million downloads in H1 FY 2023-24.


You might’ve heard of Grok, X’s answer to OpenAI’s ChatGPT. It’s a chatbot, and, in that sense, behaves as you’d expect, answering questions about current events, pop culture and so on. But unlike other chatbots, Grok has “a bit of wit,” as X owner Elon Musk puts it, and “a rebellious streak.” Long story short, Grok is willing to speak to topics that are usually off limits to other chatbots, like polarizing political theories and conspiracies. And it’ll use less-than-polite language while doing so, for example, responding to the question “When is it appropriate to listen to Christmas music?” with “Whenever the hell you want.”

But Grok’s biggest ostensible selling point is its ability to access real-time X data, an ability no other chatbot has, thanks to X’s decision to gatekeep that data. Ask it “What’s happening in AI today?” and Grok will piece together a response from very recent headlines, while ChatGPT, by contrast, will provide only vague answers that reflect the limits of its training data (and filters on its web access). Earlier this week, Musk pledged that he would open source Grok, without revealing precisely what that meant.

So, you’re probably wondering: How does Grok work? What can it do? And how can I access it? You’ve come to the right place. We’ve put together this handy guide to help explain all things Grok. We’ll keep it up to date as Grok changes and evolves.

How does Grok work?

Grok is the invention of xAI, Elon Musk’s AI startup, which is reportedly in the process of raising billions in venture capital. (Developing AI is expensive.) Underpinning Grok is a generative AI model called Grok-1, developed over the course of months on a cluster of “tens of thousands” of GPUs, according to an xAI blog post. To train it, xAI sourced data both from the web (dated up to Q3 2023) and from feedback provided by human assistants that xAI refers to as “AI tutors.” On popular benchmarks, xAI claims, Grok-1 is about as capable as Meta’s open source Llama 2 chatbot model and surpasses OpenAI’s GPT-3.5.

Human-guided feedback, or reinforcement learning from human feedback (RLHF), is the way most AI-powered chatbots are fine-tuned these days. RLHF involves training a generative model, then gathering additional information to train a “reward” model and fine-tuning the generative model with the reward model via reinforcement learning. RLHF is quite good at “teaching” models to follow instructions, but not perfect. Like other models, Grok is prone to hallucinating, sometimes offering misinformation and false timelines when asked about news. And these errors can be severe, like wrongly claiming that the Israel–Palestine conflict had reached a ceasefire when it hadn’t.

For questions that stretch beyond its knowledge base, Grok leverages “real-time access” to info on X (and from Tesla, according to Bloomberg). And, similar to ChatGPT, the model has internet browsing capabilities, enabling it to search the web for up-to-date information about topics. Musk has promised improvements with the …
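The three-stage RLHF loop described above (generative model, reward model fit from human preferences, reinforcement-style fine-tuning) can be illustrated in a deliberately toy sketch. Everything here is hypothetical, not xAI's actual training code: the canned candidate responses, the win-count "reward model," and the simple probability-nudging update are all stand-ins for neural networks and real RL algorithms, shown only to make the loop's shape concrete.

```python
import random

random.seed(0)

CANDIDATES = ["curt answer", "helpful detailed answer", "rude answer"]

# Stage 1: the "generative model" is just a probability distribution
# over canned responses (a real system would be a neural language model).
policy = {c: 1.0 / len(CANDIDATES) for c in CANDIDATES}

# Stage 2: human labelers compare response pairs; we "train" a reward
# model by counting wins and losses per response.
preferences = [
    ("helpful detailed answer", "curt answer"),
    ("helpful detailed answer", "rude answer"),
    ("curt answer", "rude answer"),
]
reward = {c: 0.0 for c in CANDIDATES}
for winner, loser in preferences:
    reward[winner] += 1.0
    reward[loser] -= 1.0

# Stage 3: reinforcement step. Sample responses from the policy and
# shift probability mass toward responses the reward model scores highly
# (real systems use algorithms such as PPO instead of this nudge).
LEARNING_RATE = 0.05
for _ in range(200):
    sampled = random.choices(CANDIDATES, weights=[policy[c] for c in CANDIDATES])[0]
    policy[sampled] = max(1e-6, policy[sampled] + LEARNING_RATE * reward[sampled])
    total = sum(policy.values())
    policy = {c: p / total for c, p in policy.items()}  # renormalize

# After fine-tuning, the policy favors the human-preferred response.
print(max(policy, key=policy.get))
```

After the loop, the helpful response dominates the distribution and the rude one is suppressed, which is the whole point of the reward-model step: the generative model is steered by human preference data rather than by its pretraining data alone.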