Artificial Intelligence: Week #9 | 2021

This week in AI & Machine Learning: robotic doctors, SEER, Detectron2 mobile, multimodal neurons, green AI, why production machine learning fails, and more.

Don’t want to miss new articles or tutorials? You can subscribe to our publication on Medium to get weekly AI news and more!

Happy IWD 2021!

For #InternationalWomensDay2021, Sixgill recognizes & honors the achievements of all women, especially those working to shape our future. To celebrate the day, here’s our spotlight on our own Chief Product Officer, Elizabeth Spears.

Artificial Intelligence News:

The (robotic) doctor will see you now

A team of researchers from MIT and Brigham and Women’s Hospital conducted a study that found people are quite receptive to interacting with healthcare providers through robots for symptom evaluation. With social distancing and video calls becoming the norm in 2020, it makes sense that people have grown more comfortable with these technologies in new settings. I would love to have a Boston Dynamics Spot Mini as my doctor.

SEER: The start of a more powerful, flexible, and accessible era for computer vision

The Facebook AI team has released SEER (SElf-supERvised), a computer vision model trained with self-supervised learning, meaning it learns from images without labels. This could pave the way for models that can learn to do almost any task with very few labels or little supervision!
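To give a concrete feel for what learning without labels looks like, here is a minimal sketch of a generic contrastive self-supervised loss in PyTorch (SimCLR-style, not SEER’s actual SwAV objective): the model is pushed to map two augmented views of the same image close together in embedding space, and away from other images.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """SimCLR-style NT-Xent loss over two augmented views of the same image batch.
    z1, z2: (N, d) embeddings of view 1 and view 2 -- no labels involved."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2N, d)
    sim = z @ z.t() / temperature               # pairwise similarities
    sim.fill_diagonal_(float("-inf"))           # ignore self-similarity
    n = z1.size(0)
    # the positive for each row is the other augmented view of the same image
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# toy usage: in practice z1 and z2 come from a backbone applied to two augmentations
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(contrastive_loss(z1, z2))
```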

Postmates Spins off Serve Robotics

After Uber acquired Postmates, the robotic food delivery division was spun off as an independent startup called Serve Robotics. With food deliveries happening now more than ever, smaller autonomous vehicles with potentially smaller carbon footprints doing the driving could make a lot of sense.

Developer Tools & Education:

D2Go brings Detectron2 to mobile

You will soon be able to easily deploy Detectron2 models to mobile! D2Go is built on top of Detectron2, PyTorch Mobile, and TorchVision. 
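D2Go has its own mobile export and quantization pipeline, but for a sense of the Detectron2 workflow it builds on, here is a rough sketch using plain Detectron2 inference (not D2Go-specific; the image path and score threshold are placeholders):

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Load a COCO-pretrained Faster R-CNN from the Detectron2 model zoo
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5    # placeholder confidence threshold

predictor = DefaultPredictor(cfg)
image = cv2.imread("street.jpg")               # placeholder input image
outputs = predictor(image)
print(outputs["instances"].pred_classes)       # detected class ids
print(outputs["instances"].pred_boxes)         # corresponding bounding boxes
```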

PAIRED: A New Multi-agent Approach for Adversarial Environment Generation

Google introduces PAIRED, a new multi-agent approach in which an adversary generates better training environments for reinforcement learning agents.

Multimodal Neurons in Artificial Neural Networks

OpenAI dives into how the neurons in CLIP, its general-purpose vision system, actually function. This is a really great read. I suggest at least reading the short “Attacks in the wild” section.
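The “Attacks in the wild” section shows a typographic attack: taping a handwritten “iPod” label to an apple flips CLIP’s prediction. You can reproduce the scoring side of this with the open-source clip package; here is a rough sketch (the image path and captions are placeholders):

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("apple.jpg")).unsqueeze(0).to(device)   # placeholder image
texts = clip.tokenize(["a photo of an apple", "a photo of an iPod"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # cosine similarity between the image and each caption, normalized to probabilities
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)   # a taped-on "iPod" label can swing this score dramatically
```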

Adversarial attacks with FGSM (Fast Gradient Sign Method)

In this tutorial, you will learn how to perform adversarial attacks using the Fast Gradient Sign Method (FGSM) and how to implement it in Keras and TensorFlow.
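The heart of FGSM fits in a few lines. Here is a rough TensorFlow sketch (the model, input, and epsilon are placeholders, and the tutorial’s actual code may differ): the input image is nudged in the direction of the sign of the loss gradient, which is often enough to flip the prediction.

```python
import tensorflow as tf

def fgsm_adversarial_example(model, image, label, epsilon=0.01):
    """Perturb `image` in the direction that increases the model's loss (FGSM)."""
    image = tf.convert_to_tensor(image)
    loss_fn = tf.keras.losses.CategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(image)                 # track gradients w.r.t. the input pixels
        prediction = model(image)
        loss = loss_fn(label, prediction)
    gradient = tape.gradient(loss, image)
    # FGSM step: add epsilon times the sign of the input gradient
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)  # keep pixel values valid
```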

Why Production Machine Learning Fails — And How To Fix It

Learn how to deploy machine learning models and avoid common problems. 

Upcoming Online AI & Data Science Events:

Practical Approaches for Efficient Hyperparameter Optimization

Mar 16, 10:00 AM PST

This talk will walk machine learning practitioners through guidelines for efficient hyperparameter optimization based on Oríon, an open source HPO framework.

Adversarial Attacks on Machine Learning Models

Mar 17, 7:00 PM PST

Learn why understanding adversarial attacks on machine learning models is important for model security and reliability. 

Deep Dive Deep Learning Infrastructure with Lambda Labs

Mar 22, 7:00 PM PST

This talk covers how to think about designing and selecting infrastructure for deep learning.

Intro to Computer Vision: Building Object Detection Models and Datasets

Mar 24, 10:00 AM PST

Build your own object detection model from start to finish, including data annotation and model training on your own dataset.

Interesting Podcasts & Interviews:

Fairness in A.I. | Super Data Science

Ayodele Odubela, a Data Science Advocate at Comet ML, discusses historical biases in data and models.

Building the Cambridge-1 Supercomputer During a Pandemic | NVIDIA

Marc Hamilton, NVIDIA’s vice president of solutions architecture and engineering, discusses building the U.K.’s most powerful supercomputer (Cambridge-1) during the pandemic.

Common Sense Reasoning in NLP with Vered Shwartz | TWiML

Vered Shwartz, a Postdoctoral Researcher at both the Allen Institute for AI and the Paul G. Allen School of Computer Science, discusses common sense reasoning for natural language processing (NLP).

How to Be Human in the Age of AI with Ayanna Howard | TWiML

Ayanna Howard, the Dean of the College of Engineering at The Ohio State University, discusses her new book and her research on the relationship between humans and robots.

Green AI | Practical AI

Roy Schwartz (Hebrew University of Jerusalem) and Jesse Dodge (AI2) suggest the AI research community should pay more attention to efficiency and the carbon footprint of AI.

More Sixgill Blog Posts: