Best AI Papers of 2020 Broach GPT-3 Large Language Model Concerns 



By AI Trends Staff  

The Best AI Papers of 2020 were called out in a list posted on GitHub, whose author links each one to a video explanation, a more in-depth article, and some code. 

Louis-Francois Bouchard, AI research scientist

“In the field of AI, many important aspects were highlighted this year, like the ethical aspects and important biases,” stated Louis-Francois Bouchard of Quebec, Canada, a self-described “master student,” AI research scientist, and speaker, in the GitHub list. “Artificial intelligence and our understanding of the human brain and its link to AI is constantly evolving, showing promising applications,” he adds.  

Here is a video summary of the best AI papers of 2020, and here are selected highlights: 

YOLOv4: Optimal Speed and Accuracy of Object Detection 


The main goal of Alexey Bochkovskiy and his coauthors in the paper “YOLOv4: Optimal Speed and Accuracy of Object Detection” is to build an object detector that is both super fast and highly accurate.  

Many features are said to improve Convolutional Neural Network (CNN) accuracy, but they must be tested on large datasets. Some features operate exclusively on certain models, for certain problems, or only for small-scale datasets, while others, such as batch normalization and residual connections, are applicable to the majority of models, tasks, and datasets. Results included a real-time speed of roughly 65 frames per second (FPS) on a Tesla V100.  
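To make that distinction concrete, here is a minimal PyTorch sketch (my own illustration, not code from the paper) of a convolutional block combining the two broadly applicable features mentioned above, batch normalization and a residual connection:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with batch normalization plus an
    identity skip connection -- the kind of broadly applicable
    features the YOLOv4 paper surveys."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)  # residual (skip) connection
```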

The authors also introduced two new methods of data augmentation: Mosaic and Self-Adversarial Training (SAT).  
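For intuition, here is a rough, self-contained sketch of the Mosaic idea, which stitches four training images into one around a random center point. The function name and details are my own, and the paper's version also remaps bounding-box labels, which is omitted here:

```python
import numpy as np

def mosaic(images, out_size=608):
    """Illustrative Mosaic augmentation: place four images into
    the four quadrants defined by a random center point.
    `images`: list of four (H, W, 3) uint8 arrays."""
    assert len(images) == 4
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    # Random center point dividing the canvas into four quadrants.
    cx = np.random.randint(out_size // 4, 3 * out_size // 4)
    cy = np.random.randint(out_size // 4, 3 * out_size // 4)
    regions = [
        (0, 0, cx, cy),                # top-left
        (cx, 0, out_size, cy),         # top-right
        (0, cy, cx, out_size),         # bottom-left
        (cx, cy, out_size, out_size),  # bottom-right
    ]
    for img, (x1, y1, x2, y2) in zip(images, regions):
        h, w = y2 - y1, x2 - x1
        # Naive resize of each image to its quadrant by index sampling.
        ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
        canvas[y1:y2, x1:x2] = img[ys][:, xs]
    return canvas
```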

The authors are Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. 

 


DeepFaceDrawing: Deep Generation of Face Images from Sketches  

Researchers at the Institute of Computing Technology, Chinese Academy of Sciences, did a study on generating realistic face images from sketches, with zero drawing skills required of the user.   

“Our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch,” the authors state. “Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches,” they add.  
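One way to read that “shape space” idea is as a projection: the feature vector of an input sketch is pulled toward a manifold spanned by features of real face sketches. The snippet below is my own loose illustration of such a projection via nearest neighbors, not the authors' actual pipeline, which works with component-level autoencoders for face regions such as the eyes, nose, and mouth:

```python
import numpy as np

def project_to_manifold(query, bank, k=10):
    """Pull `query` (a (d,) feature vector for an input sketch)
    toward the manifold of plausible faces by reconstructing it
    from its k nearest neighbors in `bank`, an (n, d) matrix of
    features computed from real face sketches."""
    dists = np.linalg.norm(bank - query, axis=1)
    neighbors = bank[np.argsort(dists)[:k]]           # (k, d)
    # Least-squares weights that best reconstruct the query.
    w, *_ = np.linalg.lstsq(neighbors.T, query, rcond=None)
    w = w / w.sum()                                   # normalize weights
    return w @ neighbors                              # point near the manifold
```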

Here is a video demonstration of the deep face drawing technology.   

The authors are: Shu-Yu Chen, Wanchao Su, Lin Gao, Shihong Xia, and Hongbo Fu. 

GPT-3: Language Models are Few-Shot Learners  

Current state-of-the-art natural language processing (NLP) systems struggle to generalize across different tasks: they must be fine-tuned on datasets of thousands of examples, while humans need to see only a few examples to perform a new language task. The goal behind GPT-3 was to improve the task-agnostic character of language models. 

“We train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting,” the authors state. “GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic,” they add. “Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.” 
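The “few-shot setting” means the task is specified entirely in the prompt: a handful of solved examples is placed in the model's context window and the model completes the final, unsolved one, with no gradient updates. A minimal sketch of such a prompt (the format is illustrative, not OpenAI's exact evaluation harness):

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt: solved demonstrations followed by
    one unanswered item for the model to complete."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)

# Example: 3-digit arithmetic, one of the tasks tested in the paper.
prompt = few_shot_prompt(
    [("What is 123 + 456?", "579"),
     ("What is 711 - 204?", "507")],
    "What is 312 + 460?",
)
print(prompt)  # a language model would be asked to complete this text
```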

Language models “have potentially harmful applications,” the authors state in a section on Implications. For example, “Any socially harmful activity that relies on generating text could be augmented by powerful language models. Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting. Many of these applications bottleneck on human beings to write sufficiently high quality text,” the authors state. 

Here is a video summary of GPT-3.  

 

Learning to Cartoonize Using White-box Cartoon Representations 

This paper presents an approach to image cartoonization. By observing cartoon painting behavior and consulting artists, the authors propose to separately identify three white-box representations in images: the surface representation, the structure representation, and the texture representation.  

A Generative Adversarial Network (GAN) framework is used to learn the extracted representations and to cartoonize images.   
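As one concrete example, the texture representation relies on a random color shift that collapses an image to a single channel, so the network must learn texture cues independently of color. Here is a loose sketch of that idea; the exact weighting scheme in the paper may differ:

```python
import numpy as np

def texture_representation(img, sigma=0.2):
    """Loose sketch of a random color shift: mix the RGB channels
    with randomly jittered weights into one channel, then tile it
    back to three channels. `img`: float array in [0, 1], (H, W, 3)."""
    w = 1.0 + np.random.uniform(-sigma, sigma, size=3)  # jittered channel weights
    gray = (img * w).sum(axis=2) / w.sum()
    return np.repeat(gray[..., None], 3, axis=2)
```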

Here is a video summary of the cartoonization research. 

The paper’s authors are: Xinrui Wang of the ByteDance AI Lab, and Jinze Yu of Style2Paints Research. 
