Deep Learning Simplified 💻🧠

"For the Contributors, By the Contributors!"✨ An Open Source Project Repository, maintained by, Abhishek Sharma👨‍💻

The problem Deep Learning Simplified 💻🧠 solves

📌 Introduction
Deep learning is a subset of machine learning that is essentially a neural network with three or more layers. These networks attempt to simulate the behavior of the human brain (albeit far from matching its ability), allowing them to "learn" from large amounts of data. Deep learning allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. The concept itself is not new; it has been around for decades. It is in the spotlight now because, until recently, we lacked the processing power and the volume of data it requires. As processing power has grown exponentially over the last 20 years, machine learning and deep learning have come into the picture.
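
To make the idea of "three or more layers" concrete, here is a minimal sketch (not part of the repository) of such a stack of processing layers; Keras and the layer sizes are illustrative assumptions:

```python
# A minimal "deep" network: an input followed by several processing layers,
# each learning a progressively more abstract representation of the data.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                     # e.g. flattened 28x28 images
    tf.keras.layers.Dense(256, activation="relu"),    # low-level features
    tf.keras.layers.Dense(128, activation="relu"),    # mid-level features
    tf.keras.layers.Dense(64, activation="relu"),     # high-level features
    tf.keras.layers.Dense(10, activation="softmax"),  # class probabilities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```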

Deep Learning Simplified is an open-source repository containing beginner to advanced level deep learning projects for contributors who want to start their journey in Deep Learning.

🧮 Structure and Workflow
This repository consists of various machine learning projects, and every project must follow a common template. Contributors are requested to keep this in mind while contributing to the repository.

  • Dataset - This folder stores the dataset used in the project. If the dataset cannot be uploaded to this folder because of its size, add a README.md file inside the Dataset folder and place the link to the collected dataset in it. That'll work!

  • Images - This folder stores the images generated during the data analysis, data visualization, and data segmentation stages of the project.

  • Model - This folder holds your project file (the .ipynb notebook), whether it performs analysis or prediction. Besides the project file, it should also contain a README.md following this template and a requirements.txt file listing all the add-ons and libraries used in the project. (A sketch of the resulting layout is shown below.)
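
To make the template concrete, here is a purely illustrative Python sketch (not part of the repository) that scaffolds the folders and files described above; the placeholder contents of requirements.txt and the project name are assumptions:

```python
# Hypothetical helper that scaffolds the project template described above.
from pathlib import Path

def scaffold_project(root: str, name: str) -> None:
    project = Path(root) / name
    for folder in ("Dataset", "Images", "Model"):
        (project / folder).mkdir(parents=True, exist_ok=True)
    # Dataset/README.md holds the dataset link when the files are too large to upload.
    (project / "Dataset" / "README.md").write_text("Dataset link: <add link here>\n")
    # Model/ holds the notebook, its README, and the dependency list.
    (project / "Model" / "README.md").write_text(f"# {name}\n")
    (project / "Model" / "requirements.txt").write_text("numpy\npandas\ntensorflow\n")

scaffold_project(".", "My Deep Learning Project")
```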

Open Source Programs participated in: Social Summer of Code, 2022

Challenges I ran into

✅ Learning without Supervision
Deep learning models are among the most data-hungry models in the Machine Learning world, if not the most. They need huge amounts of data to reach their optimal performance and serve us with the excellence we expect from them. However, having this much data is not always easy. Additionally, while we can have large amounts of data on some topic, much of the time it is not labelled, so we cannot use it to train any kind of supervised learning algorithm.
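
One common workaround when labels are missing is to learn from the inputs alone, for example by pretraining an autoencoder and reusing its encoder as a feature extractor. The sketch below assumes Keras and uses random data as a stand-in for a real unlabelled dataset:

```python
# Unsupervised pretraining sketch: the network learns to reconstruct its own
# input, so no labels are needed at this stage.
import numpy as np
import tensorflow as tf

x_unlabelled = np.random.rand(1000, 784).astype("float32")  # stand-in for real unlabelled data

encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),            # compressed representation
])
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = tf.keras.Sequential([encoder, decoder])

autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_unlabelled, x_unlabelled, epochs=5, batch_size=32, verbose=0)

# The trained encoder can now be reused as a feature extractor for a later
# supervised task that has only a small labelled dataset.
features = encoder.predict(x_unlabelled, verbose=0)
```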

✅ Coping with data from outside the training distribution
Data is dynamic; it changes with time, location, and many other conditions. However, Machine Learning models, including Deep Learning ones, are built using a fixed set of data (the training set) and perform well only as long as the data later used for prediction comes from the same distribution as the data the system was built with. They perform poorly when they are fed data that is not entirely different, but that does vary somewhat from the training data.
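
A simple, illustrative sanity check for this problem is to record per-feature statistics of the training set and flag incoming batches whose statistics drift too far; the threshold and data below are assumptions for the sketch:

```python
# Minimal distribution-shift check: compare feature means of new data against
# those recorded from the training set.
import numpy as np

def fit_reference_stats(x_train: np.ndarray):
    """Record per-feature mean and standard deviation from the training set."""
    return x_train.mean(axis=0), x_train.std(axis=0) + 1e-8

def drift_score(x_new: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Average absolute z-score of the new batch's feature means."""
    return float(np.abs((x_new.mean(axis=0) - mean) / std).mean())

x_train = np.random.normal(0.0, 1.0, size=(5000, 20))
mean, std = fit_reference_stats(x_train)

x_shifted = np.random.normal(0.5, 1.0, size=(200, 20))  # data from a shifted distribution
if drift_score(x_shifted, mean, std) > 0.3:              # hypothetical threshold
    print("Warning: incoming data may not match the training distribution.")
```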

âś… Incorporating Logic
This means incorporating some sort of rule-based knowledge, so that logical procedures can be implemented and sequential reasoning can be used to formalise knowledge. While these cases can be covered in code, Machine Learning algorithms don't usually incorporate sets of rules into their knowledge. Much like a prior distribution used in Bayesian learning, sets of pre-defined rules could assist Deep Learning systems in their learning.
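
One simple way to bolt such rules onto a trained network, sketched below under made-up classes and a made-up rule, is to mask out predictions that violate domain knowledge before the final decision is taken:

```python
# Rule-based post-processing sketch: hard-coded domain rules zero out classes
# that are impossible in the current context, then the remaining probabilities
# are renormalised.
import numpy as np

CLASSES = ["cat", "dog", "truck"]

def apply_rules(probs: np.ndarray, context: dict) -> np.ndarray:
    """Zero out classes forbidden by domain rules, then renormalise."""
    allowed = np.ones_like(probs)
    if context.get("indoor_scene"):               # rule: trucks do not appear indoors
        allowed[CLASSES.index("truck")] = 0.0
    masked = probs * allowed
    return masked / masked.sum()

network_output = np.array([0.2, 0.3, 0.5])        # raw softmax probabilities
final = apply_rules(network_output, {"indoor_scene": True})
print(CLASSES[int(final.argmax())])               # "dog" after the rule is applied
```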

✅ The Need for less data and higher efficiency
Although we touched on this in the first two sections, the point is worth highlighting on its own. The success of Deep Learning comes from the ability to incorporate many layers into our models, allowing them to explore an enormous number of linear and non-linear parameter combinations. However, with more layers comes more model complexity, and more data is needed for such a model to function correctly.
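
The sketch below illustrates why depth drives data hunger: the trainable parameter count grows quickly with the number of layers, and every extra parameter is another degree of freedom that has to be constrained by data. Keras and the layer widths are assumptions for illustration:

```python
# Count trainable parameters for dense stacks of increasing depth.
import tensorflow as tf

def dense_stack(depth: int, width: int = 256) -> tf.keras.Model:
    layers = [tf.keras.Input(shape=(784,))]
    layers += [tf.keras.layers.Dense(width, activation="relu") for _ in range(depth)]
    layers += [tf.keras.layers.Dense(10, activation="softmax")]
    return tf.keras.Sequential(layers)

for depth in (1, 3, 6):
    print(depth, "hidden layers ->", dense_stack(depth).count_params(), "parameters")
```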
