Researchers have long tried to pierce the “black box” of deep learning. Even though neural networks have proven remarkably successful across text, image, and other generative tasks, they have largely remained shrouded in mystery. Buried in layers of complex computations, they resist easy interpretation, making it hard to diagnose errors or biases.
In an effort to understand these ‘black boxes’ better, researchers from nine institutions, including the Center for AI Safety (CAIS) in San Francisco, have published a paper. The work gives human overseers insight into a model’s decision-making process and indicates when a model is misbehaving.
The research represents a step forward in interpretability, which has long been a sticking point for researchers across the field. The subject has also taken centre stage in political discussion, with Senator Chuck Schumer calling explainability “one of the most important and most difficult technical issues in all of AI.”
The study not only lets humans in the loop pinpoint failures but also enables them to make their AI models actively safer. Researchers can now exert control over whether these models adhere to truth or falsehood, behave morally or immorally, exhibit emotional responses, resist hacking attempts, mitigate biases, and more.
The researchers introduce representation engineering (RepE), an approach to increasing transparency in AI systems that draws on insights from cognitive neuroscience. RepE places population-level representations, rather than individual neurons or circuits, at the centre of analysis, equipping researchers with novel methods for monitoring and manipulating high-level cognitive phenomena in deep neural networks (DNNs).
Through the paper, the team provides baselines and an initial analysis of RepE techniques, showing that they offer simple yet effective solutions for improving the understanding and control of large language models (LLMs).
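To make the population-level idea concrete, here is a minimal sketch of how reading and steering a high-level concept from a model’s internal activations can look in practice. It is an illustration of the general technique, not the authors’ exact pipeline: the model (gpt2), the layer index, the steering strength, the “honest vs. dishonest” prompt pairs, and the simple difference-of-means direction are all assumptions made for demonstration.

```
# Illustrative sketch of representation reading and steering, not the paper's exact method.
# Assumptions: gpt2, layer 6, difference-of-means "honesty" direction, steering coefficient 4.0.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # assumed small open model for illustration
LAYER = 6             # assumed intermediate layer to read from and steer
COEFF = 4.0           # assumed steering strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Contrastive prompt pairs meant to differ along a single concept (honesty).
pairs = [
    ("Pretend you are an honest person. The capital of France is",
     "Pretend you are a dishonest person. The capital of France is"),
    ("Pretend you are an honest person. Two plus two equals",
     "Pretend you are a dishonest person. Two plus two equals"),
]

def last_token_hidden(text):
    """Hidden state of the final token at the chosen layer (a population-level reading)."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

# Averaging the differences across contrastive pairs gives a candidate concept direction.
diffs = torch.stack([last_token_hidden(a) - last_token_hidden(b) for a, b in pairs])
direction = diffs.mean(dim=0)
direction = direction / direction.norm()

# Monitoring: project a new activation onto the direction to get a scalar reading.
score = torch.dot(last_token_hidden("Pretend you are an honest person. The sky is"), direction)
print(f"Projection onto concept direction: {score.item():.3f}")

# Control: add the direction to the layer's output during generation via a forward hook.
def steering_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + COEFF * direction.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
ids = tok("The weather today is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()
```

In this sketch, the same learned direction serves both purposes the paper highlights: projecting activations onto it acts as a monitor, while adding it back into the residual stream nudges the model’s behaviour along that concept.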
“Although we have traction on ‘outer’ and ‘inner’ alignment, we should continue working in these directions. At the same time, we should increase our attention for risks that we do not have as much traction on, such as risks from competitive pressures,” suggested Dan Hendrycks, one of the authors of the paper and the director of CAIS, in research released earlier this year. That paper, titled “X-Risk Analysis for AI Research”, was also cited in the open letter calling for a six-month pause on training AI systems more powerful than OpenAI’s GPT-4.








