
6 Outstanding Papers Presented at NeurIPS 2023 

At the ongoing Neural Information Processing Systems (NeurIPS) annual conference, the outstanding paper awards have been announced after reviewers and chairpersons evaluated tens of thousands of submissions.

Out of the 13,321 papers submitted by authors and researchers worldwide, the very best have won this year's outstanding awards. Here are the 6 outstanding papers announced by NeurIPS in 2023:

Outstanding Main Track Papers

Privacy Auditing with One (1) Training Run

Steinke, Nasr, and Jagielski propose an efficient auditing scheme for assessing the privacy of differentially private machine learning (ML) systems in a single training run. They leverage the parallelism of adding or removing multiple training examples independently. They avoid the computational cost of group privacy by analysing the connection between differential privacy and statistical generalisation.

Their approach works in both black-box and white-box settings, requiring minimal assumptions about the algorithm. They demonstrate the effectiveness of their framework on DP-SGD, achieving meaningful privacy bounds with just one model, while standard methods would need hundreds of models.
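The one-run idea can be sketched in a toy simulation (this is an illustration, not the paper's estimator, which derives rigorous high-confidence bounds): include each of m canary examples independently with probability 1/2, let an attack guess membership from per-canary losses, and convert the guess accuracy into an empirical lower bound on the privacy parameter ε. The loss scores below are simulated rather than taken from a real trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_lower_bound(accuracy: float) -> float:
    """Heuristic point estimate: an eps-DP mechanism caps a membership
    guess's accuracy at e^eps / (1 + e^eps), so acc -> log(acc / (1 - acc))."""
    accuracy = min(max(accuracy, 1e-6), 1 - 1e-6)
    return float(np.log(accuracy / (1.0 - accuracy)))

# One simulated training run with m canaries, each included independently
# with probability 1/2 -- the parallelism the paper exploits.
m = 10_000
included = rng.random(m) < 0.5
# Toy attack signal: included canaries tend to have lower loss.
loss = rng.normal(loc=np.where(included, -0.5, 0.5), scale=1.0)
guess_in = loss < 0.0                      # attack: low loss => "member"
accuracy = float(np.mean(guess_in == included))

print(f"attack accuracy = {accuracy:.3f}")
print(f"empirical epsilon lower bound ~ {epsilon_lower_bound(accuracy):.3f}")
```

A chance-level attacker (accuracy 0.5) yields a bound of 0, as expected; stronger attacks push the bound up.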

Are Emergent Abilities of Large Language Models a Mirage?

Schaeffer, Miranda, and Koyejo challenge the idea that large language models (LLMs) exhibit true emergent abilities. They propose that perceived emergent abilities are often a result of the researcher’s metric choices rather than fundamental changes in model behaviour with scale. They support this with a mathematical model and three analyses:

Confirming predictions on metric effects using InstructGPT/GPT-3

Validating predictions in a meta-analysis on BIG-Bench

Demonstrating how metric choices can create apparent emergent abilities in vision tasks across different networks.

Their findings suggest that alleged emergent abilities may vanish with different metrics, questioning the notion that they are intrinsic to scaled AI models.
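The argument can be made concrete with a toy example (the numbers are illustrative, not from the paper): a per-token accuracy that improves smoothly with scale looks like a sudden "emergent" jump once performance is measured with an all-or-nothing exact-match metric over a 20-token answer:

```python
import numpy as np

# Smoothly improving per-token accuracy p(scale) -- a hypothetical power law.
scales = np.logspace(0, 4, 9)              # hypothetical model scales
p = 1.0 - 0.5 * scales ** -0.3             # smooth, gradual improvement
L = 20                                     # answer length in tokens
exact_match = p ** L                       # requires every token correct

for s, pt, em in zip(scales, p, exact_match):
    print(f"scale={s:>8.0f}  per-token acc={pt:.3f}  exact match={em:.4f}")
```

Per-token accuracy climbs gradually from 0.5 to about 0.97, yet exact match sits near zero for most scales and then shoots up, purely as an artefact of the metric.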

Runners-Up

Scaling Data-Constrained Language Models

In the paper, researchers explored scaling language models in data-limited scenarios, given that internet text data may eventually constrain training. They conducted extensive experiments, varying the extent of data repetition and compute budgets, training on up to 900 billion tokens and models of up to 9 billion parameters. Results showed that with limited data and a fixed compute budget, up to 4 epochs of repeated data had minimal impact on loss. However, further repetition diminished the value of additional compute. 

They proposed a scaling law for compute optimality, considering the declining value of repeated tokens and excess parameters. Additionally, they tested methods to alleviate data scarcity, such as augmenting with code data or removing common filters.
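The shape of such a law can be sketched as follows (the decay constant `r_star` and the token counts here are illustrative assumptions, not the paper's fitted values): each additional epoch contributes "effective" unique tokens, but with exponentially decaying value, so early repeats are almost free while later ones add little.

```python
import math

def effective_unique_data(unique_tokens: float, epochs: float,
                          r_star: float = 15.0) -> float:
    """Effective data under repetition: repeats beyond the first pass decay
    in value exponentially. r_star controls how fast repeated tokens lose
    value; it is an illustrative constant, not the paper's fitted estimate."""
    repeats = epochs - 1.0          # extra passes beyond the first
    return unique_tokens * (1.0 + r_star * (1.0 - math.exp(-repeats / r_star)))

U = 100e9  # 100B unique tokens (hypothetical budget)
for ep in (1, 2, 4, 8, 16, 64):
    eff = effective_unique_data(U, ep) / 1e9
    print(f"{ep:>3} epochs -> effective ~ {eff:.0f}B tokens")
```

Under these assumed constants, 4 epochs of a 100B-token corpus behave almost like 372B fresh tokens, while 64 epochs fall far short of the naive 6,400B, mirroring the paper's qualitative finding.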

Models and datasets from 400 training runs are freely available on GitHub.

Direct Preference Optimization: Your Language Model is Secretly a Reward Model

Here, researchers introduced Direct Preference Optimization (DPO) as a streamlined alternative to Reinforcement Learning from Human Feedback (RLHF) for controlling large unsupervised language models. Unlike RLHF, DPO avoids the complexity and instability of fitting reward models and fine-tuning. Leveraging a mapping between reward functions and optimal policies, DPO directly optimises a single-stage policy training process, solving a classification problem on human preference data. 

The experiments demonstrate that DPO can effectively align language models with human preferences, outperforming RLHF in sentiment control and improving response quality in summarisation and dialogue. Notably, DPO is more straightforward to implement and train.
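At its core, DPO reduces preference learning to a logistic loss on pairs of responses. A minimal sketch of the loss on one pair (the sequence log-probabilities below are placeholder numbers, and beta = 0.1 is a typical but arbitrary choice):

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss on one preference pair: a logistic loss on the difference of
    policy-vs-reference log-ratios. The implicit reward is
    r(x, y) = beta * log(pi(y|x) / pi_ref(y|x))."""
    logits = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-logits)))   # -log sigmoid(logits)

# If the policy already prefers the chosen response more than the reference
# does, the loss dips below log(2); otherwise it rises above it.
low = dpo_loss(logp_chosen=-5.0, logp_rejected=-9.0,
               ref_logp_chosen=-7.0, ref_logp_rejected=-7.0)
high = dpo_loss(logp_chosen=-9.0, logp_rejected=-5.0,
                ref_logp_chosen=-7.0, ref_logp_rejected=-7.0)
print(low, np.log(2.0), high)
```

Minimising this over a preference dataset trains the policy directly, with no separate reward model or RL loop.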

Datasets and Benchmarks Papers

ClimSim: A large multi-scale dataset for hybrid physics-ML climate emulation

Machine learning experts have introduced ClimSim, the largest hybrid ML-physics dataset, co-created by climate scientists and ML researchers. With 5.7 billion pairs of input-output vectors, it isolates the impact of high-resolution physics on macro-scale climate states. Global and spanning multiple years, the dataset facilitates emulators compatible with operational climate simulators.

The data and code are released openly to support the development of hybrid ML-physics and high-fidelity climate simulations. 

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models

With the rise of GPT models, practitioners are considering using them for sensitive applications like healthcare and finance, but research reveals undisclosed vulnerabilities. GPT models, including GPT-4, can produce biased, toxic outputs and unintentionally leak private information. 

Despite GPT-4’s generally improved trustworthiness, it exhibits greater vulnerability to jailbreaking system or user prompts. This study highlights previously unrecognised trustworthiness gaps in GPT models.

The benchmark is publicly available on GitHub.

The post 6 Outstanding Papers Presented at NeurIPS 2023  appeared first on Analytics India Magazine.
