Akihiro FUJII

489 Followers

Published in Towards Data Science · Pinned

Machine Learning 2020 summary: 84 interesting papers/articles

In this article, I present a total of 84 papers and articles published in 2020 that I found particularly interesting. For clarity, I divide them into 12 sections. My personal summary for 2020: the Transformer model made a huge leap forward. In natural…

Machine Learning

36 min read



Dec 23, 2021

Akira’s Machine Learning News — Issue #38

Featured Paper/News in This Week. A 3D Transformer that can be applied directly to molecular structures has been proposed. Its attention weights can be adjusted according to interatomic distances, and its computational complexity does not seem to be that high. …

Machine Learning

4 min read



Dec 23, 2021

MAE/SimMIM for Pre-Training Like a Masked Language Model

About this post: In this post, I introduce recently published methods for self-supervised learning in a framework similar to masked language models. The two papers covered are MAE (He et al., 2021) and SimMIM (Xie et al., 2021). Each can be briefly summarized as follows. MAE: The authors…

Deep Learning

11 min read



Nov 27, 2021

Akira’s Machine Learning News — Issue #36

Featured Paper/News in This Week. SimMIM, a model for image pre-training with a structure similar to a masked language model, has been presented. The concept is similar to that of MAE, introduced last week, but the implementation is simpler. It may become more common to use masks to pre-train image models. The…

Machine Learning

5 min read



Nov 27, 2021

Akira’s Machine Learning News — Issue #35

Featured Paper/News in This Week. A method is proposed that masks the image and pre-trains the model to recover it, like BERT. 75% of the image is masked and only the unmasked 25% is fed to the encoder, which seems memory-friendly. An image generation model…

Machine Learning

6 min read



Published in Analytics Vidhya · Nov 13, 2021

Akira’s Machine Learning News — Issue #34

Featured Paper/News in This Week. Past studies have shown that ViT classifies with more human-like behavior than CNNs; now a new study shows that ViT classifies correctly even when images are perturbed on a patch-by-patch basis, whereas CNN…

Machine Learning

7 min read



Published in Analytics Vidhya · Nov 10, 2021

Akira’s Machine Learning News — Issue #33

Featured Paper/News in This Week. A new dataset for self-supervised learning has been released that can be used for commercial purposes and is portrait-rights friendly. …

Machine Learning

6 min read



Published in Analytics Vidhya · Nov 1, 2021

What is the most important stuff in Vision Transformer?

This blog post describes the paper “Patches Are All You Need?”, submitted to ICLR 2022 (under review as of the end of October 2021). The ConvMixer proposed in this paper is composed of CNNs and BatchNorm and, unlike previous Vision Transformer variants, can achieve results even on…

Machine Learning

12 min read



Oct 31, 2021

Akira’s Machine Learning News — Issue #32

Featured Paper/News in This Week. Methods to improve the performance of zero-shot inference have been presented. Since GPT-3 zero-shot inference is used in many applications, any improvement in its performance may have a significant social impact. …

Machine Learning

6 min read



Published in Analytics Vidhya · Oct 11, 2021

Akira’s Machine Learning News — Issue #31

Featured Paper/News in This Week. A published study reports a sudden jump in generalization performance: the model overfits after about 10² training steps, yet its generalization performance suddenly improves from random-level prediction at about 10⁶ steps. Weight decay seems to be the key to this generalization. …

Machine Learning

6 min read



Data Scientist (Engineer) in Japan Twitter : https://twitter.com/AkiraTOSEI LinkedIn : https://www.linkedin.com/mwlite/in/亮宏-藤井-999868122

Following
  • Manish Chablani
  • Sergi Castella i Sapé
  • Towaki Takikawa
  • piqcy
  • Nuno B

See all (13)
