Arghya Maity

@Arghya_Maity

Data Analysis
Artificial Intelligence
Machine Learning
Deep Learning
Reinforcement Learning

Intern, IISER Kolkata

Medinipur Sadar, India

I have always been interested in sustainability research, mainly focusing on improvements in semiconductor technology and computational optimisation using Machine Learning and AI. The goal of my research is to solve real-life problems.
Artificial Intelligence and Machine Learning are the tools that sustainable development requires to plan, execute and guide the future of our planet and its sustainability more effectively. Technologies like AI will help us build more efficiently, use resources more sustainably, and reduce and manage the waste we generate more productively.
In my project on building a chess module with Python, under the guidance of Dr Kripabandhu Ghosh of IISER Kolkata, I am working towards building a chess game in Python and using Machine Learning to predict the best possible moves of the pieces based on numerous past grandmaster games. Using libraries such as pandas, copy and TensorFlow, I applied nested loops and functions to generate the board and the possible moves and captures of each piece. I am now working on classifying the possible moves as good or bad with Machine Learning, trained on a large dataset of moves from past grandmaster games, with the winners' moves labelled as good and the others as bad; a rough sketch of this idea is given below.
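A minimal sketch of the good/bad move classification, assuming the moves have already been encoded as fixed-length numeric feature vectors; the file name grandmaster_moves.csv, the winner_move label column and the layer sizes are hypothetical placeholders for illustration, not the project's actual setup.

import pandas as pd
import tensorflow as tf

# Hypothetical dataset: one row per move, numeric feature columns plus a
# binary label (1 = played by the eventual winner, 0 = otherwise).
df = pd.read_csv("grandmaster_moves.csv")
X = df.drop(columns=["winner_move"]).to_numpy(dtype="float32")
y = df["winner_move"].to_numpy(dtype="float32")

# Small binary classifier: outputs the probability that a move is "good".
model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2)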
Moreover, in the project on Fashion-MNIST, the objective was to identify different fashion products from the given images, classify them into ten categories, and compare the results of different models to arrive at the best ML model. Using libraries such as TensorFlow, TensorFlow Datasets, Keras and Keras Tuner, I applied deep learning to the Fashion-MNIST dataset with a Sequential model: two hidden layers of size 512, each using ReLU (Rectified Linear Unit) as the activation function, and Softmax as the activation function for the output layer. I chose Adam as the optimiser and Sparse Categorical Crossentropy as the loss function, and trained for 10 epochs. (I also tried tanh and sigmoid as activation functions, but they produced less accurate predictions.) Besides this, I used Keras Tuner to find the best set of hyperparameters for my ANN model; a sketch of the final architecture is given below. After training the model on the training data and validating it on the validation data, I tested the final predictive power of my model on a test dataset the algorithm had never seen before, recording a test loss and test accuracy on each run, which also rules out overfitting to the validation dataset. (The loss and accuracy were quite consistent, around 0.35 and 89% respectively, across all runs.)
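A sketch of the ANN described above: two 512-unit ReLU hidden layers, a Softmax output over the ten categories, the Adam optimiser, Sparse Categorical Crossentropy loss and 10 epochs. Loading the data through tf.keras.datasets (rather than TensorFlow Datasets), the pixel scaling and the Flatten input layer are assumptions added to make the sketch self-contained.

import tensorflow as tf

# Fashion-MNIST: 60,000 training and 10,000 test images of size 28x28.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),                        # 28x28 image -> 784-vector
    tf.keras.layers.Dense(512, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(512, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax"),  # ten clothing categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, validation_split=0.1)
test_loss, test_acc = model.evaluate(x_test, y_test)  # held-out test performance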
Along with these projects, my courses on Data Science, Machine Learning, Artificial Intelligence, and Data Structures and Algorithms have given me detailed, hands-on experience through practice assignments and capstone projects: Data Analysis and Machine Learning with Excel and Python (using libraries such as NumPy, pandas, scikit-learn, statsmodels and TensorFlow), Artificial Intelligence and Reinforcement Learning with Python (using NumPy, PyTorch and Kivy), and Data Structures and Algorithms with C. This background gives me enough insight to get involved in reinforcement learning, deep neural and Q-networks, optimisation functions and related areas, all of which have data analysis, Machine Learning and Artificial Intelligence as integral parts.