
Responsible AI and AI agents take center stage at global events



As AI becomes central to nearly every industry and geography, it’s no wonder that almost any topic can feel relevant to the field. 

However, across two global events—Grace Hopper Celebration India 2024 (GHCI) and NeurIPS—two parallel themes emerged that we think are telling about the direction of AI in coming years. 

(P.S. These events are huge! Read to the end to see our recommended sessions, papers, and resources from each.)

First, Responsible AI has to be an overarching component of developing your ML models.

Supply chain transparency, fair working conditions, ethical business practices, and environmental considerations are being scrutinized by enterprise businesses and end-users alike. 

From Vidushi Meharia, Sama Implementations Engineer:

The emphasis on human-in-the-loop AI resonated deeply; it was a powerful reminder that technology must remain a tool for empowerment, not alienation. The call for startups to prioritize trust and governance over rapid growth felt like a moral compass, steering AI towards long-term societal benefits. These sessions left me both hopeful and motivated to integrate these principles into my work. (GHCI)

From Naveena Pius, Sama Implementations Engineer:

With trust as the cornerstone, the panelists from the session, “The Future is human-in-the-loop: Cultivating Trust in AI” stressed that integrating human judgment ensures ethical, reliable AI. This conversation underscored the importance of aligning AI advancements with human values for a more transparent and accountable future. (GHCI)

From Claudel Rheault, Sama Human and AI Lead:

An extremely inspiring and necessary conversation is the one around the impact of AI on our planet. The paper “A water efficiency dataset for African Datacenters” by researchers from Carnegie Mellon Africa was particularly interesting. More data makes us better equipped to make decisions. (NeurIPS)

And second, AI applications and use cases will revolutionize how we work.

But the revolution runs on data—and its quality is paramount to its success. Removing biases, collecting diverse datasets, and understanding the interactions between your existing workforce and any AI agents that you deploy should all be considered.

Claudel:

WorkArena++ from ServiceNow is cool because AI agents will potentially have a huge impact on how we work, and the paper shares a great baseline for evaluating them. It even presents mechanisms for generating ground-truth action traces that you can then use to fine-tune your models. It points in the direction of “AI agents, then what?” Evaluating AI agents in the real world will be crucial to seeing the impact everyone wants.

Vidushi:

We saw deep insight into evolving technologies as well as the rapid growth of the industry, where every company is trying to incorporate AI applications into its systems. The focus on diversity and inclusion was strong across sessions, with speakers advocating for the removal of bias and an increase in representation. (GHCI)

Naveena:

The discussion stressed the need for high-quality gold datasets for rigorous testing and iterative prototyping. Auditing tools for evaluating LLM performance were highlighted as essential for ensuring reliability and mitigating challenges like hallucinations. (GHCI)

Sama is a leader in Responsible AI data labeling practices, and is here as a resource for your enterprise AI projects. Reach out before your next project!

Our Top 5 things to watch, read or dig deeper from NeurIPS:

  1. Ilya Sutskever’s presentation “Sequence to Sequence Learning with Neural Networks” was packed.
  2. “From Seeing to Doing: Ascending the Ladder of Visual Intelligence” - a brilliant talk by Fei-Fei Li on the complexities of spatial intelligence. 
  3. An interesting paper from the Yale School of Management, in which researchers explore a strategy to improve LLMs with a teacher model that shares improvement strategies with a student model.
  4. The best paper in the “Datasets and Benchmarks” category is a milestone in the world of preference datasets, offering much more fine-grained insight into the context around preferences. 
  5. The ServiceNow research team presented WorkArena++, a benchmark of tasks drawn from knowledge workers’ workflows. It is a key step in building AI agents that have the potential to impact the world of work.

Our Top 3 Sessions from GHCI India:

  1. Why is Cloud security everyone’s business? The keynote shed light on the pressing need for robust cloud security in an increasingly interconnected world.
  2. The Future is human-in-the-loop: Cultivating Trust in AI. This panel discussion offered profound insights into building trustworthy AI. It contrasted AI decision-making with AI-assisted decision-making, emphasizing human oversight.
  3. Gen AI for the Future of Work. The fireside chat explored the profound implications of generative AI on the evolving workplace. It raised concerns about AI overuse eroding human critical thinking and posed significant questions around intellectual property rights.
Author
The Sama Team
