Shell-Edunet Skills4Future AICTE Internship

Internship Type: Virtual
Internship Title: Edunet Foundation | Shell | Artificial Intelligence with Green Technology | 4-Week Virtual Internship
Internship Description:

Dive into the world of Artificial Intelligence with Green Technology and unlock the door to a future filled with innovation and opportunity!

Join the Shell-Edunet Skills4Future AICTE Internship! This is your chance to immerse yourself in hands-on learning of the essential technical skills for success. The Shell-Edunet Skills4Future AICTE Internship is designed to bridge the employability gap by equipping students with essential technical skills in both Artificial Intelligence (AI) and Green Skills. This certificate-linked program seeks to empower learners to thrive in the rapidly evolving skill ecosystem, fostering their ability to build successful careers in the dynamic technology sector by applying Artificial Intelligence efficiently alongside Green Skills to address society's sustainability goals.

Industry experts will mentor you throughout the internship. You'll have the opportunity to develop project prototypes that tackle real-world challenges using your preferred technology track. Working in a student team under your mentor's guidance, you will identify solutions to problems using technology. Selected students will also have the chance to showcase their developed project prototypes at a regional showcase event attended by industry leaders.

About Shell:

Shell is a global energy and petrochemical company operating in over 70 countries, with a workforce of approximately 103,000 employees. The company's goal is to meet current energy demands while fostering sustainability for the future. Leveraging a diverse portfolio and a talented team, the company drives innovation and facilitates a balanced energy transition. Its stakeholders include customers, investors, employees, partners, communities, governments, and regulators. Upholding core values of safety, honesty, integrity, and respect, the company strives to deliver reliable energy solutions while minimizing environmental impact and contributing to social progress.

About Edunet:

Edunet Foundation (EF) was founded in 2015. Edunet promotes youth innovation and tinkering, and helps young people prepare for Industry 4.0 jobs. Edunet has a national footprint, having trained 300,000+ students. It works with regulators, state technical universities, engineering colleges, and high schools throughout India to enhance the career prospects of its beneficiaries.

Keywords:

AI, Power BI, ML, Data Analytics, Green Skilling, Python Programming, Artificial Intelligence, Computer Vision, Deep Learning, Generative AI, Dashboard Programming, Microsoft Excel, Sustainability

Locations: Pan India
No. of interns: 3000
Amount of stipend per month: ZERO
Qualification: Engineering – 2nd, 3rd & 4th Year Students
Specialization:
Engineering – Computer Science, IT, Electronics and Communication, Electrical Engineering, Mechatronics, Data Science

Link: https://internship.aicte-india.org

Perks:
  • Personalized mentorship sessions and collaborative group learning.
  • Opportunities to expedite learning through project-based internships.
  • A holistic learning experience provided by industry experts through knowledge-sharing sessions.
  • Showcase your skills by creating prototypes to solve real-world challenges.
  • Earn certifications from AICTE, Edunet and Industry Partners, boosting your confidence and value to potential future employers.
  • Opportunity to present your project prototypes to a panel of industry experts at a regional showcase event.

Terms of Engagement: 4 Weeks (15th July to 16th August 2025)

Last date to apply: 30th June 2025

Eligibility Criteria:
  • Age: 17+
  • Pursuing a degree in computer science, IT, electronics, mechatronics, or a related field.
  • Students must be able to commit the required hours to the program in addition to their regular academics.
  • Students must have basic computer operating and programming skills, as relevant.
  • Any exposure to programming is preferred but not mandatory.
  • Students should have access to a computer/laptop with an internet connection, either their own or through their institution.

Note: The enrolment of students in the 4-week Skills4Future virtual internship is subject to the discretion of the team responsible for the operationalization of the internship at Edunet Foundation.

Indicative timelines for the internship:
  • Onset of registration: 03-06-2025
  • Closing of applications for internship registration: 30-06-2025
  • Internship orientation: 11-07-2025
  • Commencement of internship: 15-07-2025
  • Offer letter disbursement to interns: 16-07-2025
  • End of internship: 16-08-2025
  • Awarding of certificates: 20-08-2025

Advanced Machine Learning and Artificial Intelligence Projects

  1. Automated Irrigation Using Soil Moisture and Weather Data
  2. Classification of Fire Types in India Using MODIS Satellite Data
  3. EV Charging Demand Prediction
  4. Greenhouse Gas Emission Prediction
  5. Tree Species Identification

Weekly Completion Tasks

Week 1: Project Planning and Data Preparation

  • Define the business problem and set project objectives.
  • Gather relevant datasets and explore potential data sources.
  • Clean and preprocess data by handling missing values, outliers, and encoding.
  • Perform exploratory data analysis (EDA) to understand data patterns.
  • Split data into training, validation, and test sets.
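
A minimal sketch of the Week 1 workflow in a Jupyter Notebook. The file name data.csv and the target column are placeholders, not part of the official project materials:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the dataset ("data.csv" and the "target" column are placeholders)
df = pd.read_csv("data.csv")

# Basic cleaning: drop duplicates, impute missing numeric values with the median
df = df.drop_duplicates()
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Quick EDA: summary statistics and pairwise correlations
print(df.describe())
print(df[numeric_cols].corr())

# Split into train (70%), validation (15%), and test (15%) sets
X, y = df.drop(columns="target"), df["target"]
X_train, X_temp, y_train, y_temp = train_test_split(
    X, y, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.5, random_state=42)
```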

Submission Details:  

Expected content: Students should create a GitHub repository, upload their Jupyter Notebook file (.ipynb) to it, and share the link on the Week 1 submission page.

File format: GitHub repository link where your partial project is uploaded

Project Submission Link: On the LMS (skills4future.in), via GitHub link

 

Week 2: Model Selection and Building

  • Research and choose appropriate models for the task.
  • Implement a baseline model and evaluate its performance.
  • Train various machine learning models (e.g., Random Forest, SVM, Deep Learning).
  • Conduct feature engineering to improve model performance.
  • Apply cross-validation for more reliable model evaluation.
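
A hedged sketch of these steps, continuing the Week 1 example above (it reuses X_train, y_train, X_val, and y_val from that split):

```python
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Baseline: always predicts the majority class
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print("Baseline accuracy:", baseline.score(X_val, y_val))

# Candidate models compared with 5-fold cross-validation on the training set
for name, model in [("Random Forest", RandomForestClassifier(random_state=42)),
                    ("SVM", SVC())]:
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

A baseline matters because any trained model must beat this trivial predictor to justify its complexity.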

Submission Details: 

Expected content: Students must show partial output in a Jupyter Notebook, save the project, and share the GitHub link where it is uploaded.

File format: .ipynb file, .py file

Project Submission Link: On the LMS (skills4future.in), via GitHub link

 

Week 3: Model Evaluation and Optimization

  • Evaluate models using metrics like accuracy, precision, recall, or RMSE.
  • Fine-tune models through hyperparameter optimization and regularization.
  • Perform error analysis to address underfitting or overfitting issues.
  • Implement ensemble methods like bagging or boosting if needed.
  • Use model interpretation techniques to explain predictions.
  • Test and iterate.
  • Format the final deliverables.
  • Submit the project.
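
A short sketch of tuning, evaluation, and interpretation, again continuing the earlier example (the parameter grid values are illustrative):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report

# Hyperparameter tuning with grid search (grid values are illustrative)
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)
print("Best parameters:", search.best_params_)

# Final evaluation on the held-out test set: precision, recall, F1 per class
y_pred = search.best_estimator_.predict(X_test)
print(classification_report(y_test, y_pred))

# Simple interpretation: which features mattered most?
importances = pd.Series(search.best_estimator_.feature_importances_,
                        index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))
```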

Submission Details: 

Expected content: Students must show the final output in a Jupyter Notebook, save and share the project, and also create a PPT for the project.

File format: .ipynb file, .py file, PPT

Project Submission Link: On the LMS (skills4future.in), via GitHub link

 

Week 4: Mock Presentation & Final Presentations

Students should present their project PPT to a panel of experts.

About the Project: Classification of Fire Types in India Using MODIS Satellite Data

The dataset is sourced from NASA's Fire Information for Resource Management System (FIRMS), part of the Land, Atmosphere Near real-time Capability for EOS (LANCE). FIRMS provides near real-time and archived fire detection data from NASA’s MODIS (Moderate Resolution Imaging Spectroradiometer) satellite. The dataset includes fire hotspot locations, intensity, and associated metadata to classify fire types (e.g., forest fires, agricultural burns, urban fires).


Learning Objectives

To classify fire types across India using satellite-derived fire detection data and auxiliary geospatial variables (land cover, climate, human activity).

Dataset Acquisition and Preparation:
  • Acquire MODIS active fire data from FIRMS and preprocess for missing values/outliers.
  • Integrate ancillary datasets (land use, weather, population density) to contextualize fire types.
  • Normalize spectral indices (e.g., NDVI) for fire characterization.

Exploratory Data Analysis (EDA):
  • Analyze spatial patterns of fire hotspots across seasons/regions.
  • Visualize correlations between fire intensity and environmental factors (temperature, humidity).
  • Cluster fire events based on spectral signatures and auxiliary features.

Feature Engineering:
  • Derive temporal features (day/night, season) from timestamps.
  • Encode categorical variables (land cover type, administrative boundaries).
  • Compute distance-based features (proximity to urban areas, water bodies).

Machine Learning Model Development:
  • Train a Random Forest Classifier or XGBoost to categorize fire types.
  • Experiment with CNNs for spatial feature extraction (if using raster data).
  • Optimize hyperparameters to address class imbalance (e.g., agricultural fires dominate).

Model Evaluation:
  • Metrics: Precision, Recall, F1-Score (multi-class focus).
  • Validate using ground-truth data from forest departments.
  • Interpret feature importance (e.g., NDVI’s role in forest fire prediction).
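
A minimal end-to-end sketch for this project, assuming the standard FIRMS CSV schema (latitude, longitude, brightness, frp, acq_date, daynight) and using the archive's type column as the fire-type label; if the near-real-time file lacks that label, a labeled variant of the dataset would be needed:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# File name taken from the Kaggle link below; columns follow the FIRMS schema
df = pd.read_csv("fire_nrt_M-C61_565334.csv")

# Temporal features derived from the acquisition date and day/night flag
df["acq_date"] = pd.to_datetime(df["acq_date"])
df["month"] = df["acq_date"].dt.month
df["is_night"] = (df["daynight"] == "N").astype(int)

# Features and fire-type label ("type" encodes the detected source class)
features = ["latitude", "longitude", "brightness", "frp", "month", "is_night"]
X, y = df[features], df["type"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# class_weight="balanced" mitigates the dominance of one fire class
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```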

Data Source Link: https://www.kaggle.com/datasets/vijayveersingh/nasa-firms-active-fire-dataset-modisviirs?select=fire_nrt_M-C61_565334.csv

About the Project: Automated Irrigation Using Soil Moisture and Weather Data

The project leverages IoT sensor data (soil moisture, temperature, humidity) and satellite-derived weather forecasts to optimize irrigation schedules, reducing water waste while maintaining crop health.

Learning Objectives

To predict irrigation requirements using real-time environmental data and automate decision-making.

Dataset Acquisition and Preparation:
  • Collect sensor data (soil moisture, evapotranspiration) and weather forecasts.
  • Handle missing sensor readings via interpolation or imputation.

EDA:
  • Visualize soil moisture trends against rainfall/irrigation events.
  • Identify drought-prone periods using historical weather data.

Feature Engineering:
  • Lag features for soil moisture persistence.
  • Aggregate weather forecasts at farm-plot resolution.

Model Development:
  • Use Time Series Models (SARIMA, LSTM) to forecast short-term irrigation needs.
  • Deploy Reinforcement Learning for dynamic scheduling.

Model Evaluation:
  • Metrics: Water savings (%) vs. crop yield impact.
  • Compare to traditional irrigation baselines.
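
A minimal sketch of the lag-feature idea, with a gradient-boosting regressor standing in for the SARIMA/LSTM models named above; the file sensor_log.csv and its columns are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical sensor log: timestamp, soil_moisture, temperature, humidity
df = pd.read_csv("sensor_log.csv", parse_dates=["timestamp"]).sort_values("timestamp")

# Lag features capture soil-moisture persistence across recent readings
for lag in (1, 2, 3):
    df[f"moisture_lag{lag}"] = df["soil_moisture"].shift(lag)
df = df.dropna()

# Chronological split: never train on the future
split = int(len(df) * 0.8)
features = ["temperature", "humidity",
            "moisture_lag1", "moisture_lag2", "moisture_lag3"]
train, test = df.iloc[:split], df.iloc[split:]

model = GradientBoostingRegressor(random_state=42)
model.fit(train[features], train["soil_moisture"])
pred = model.predict(test[features])
print("MAE:", mean_absolute_error(test["soil_moisture"], pred))

# A simple irrigation rule: trigger when predicted moisture falls below a
# threshold (0.30 is an assumed value; tune per crop and soil type)
irrigate = pred < 0.30
```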

Data Source Link: https://www.kaggle.com/datasets/mahmoudshaheen1134/irrigation-machine-dataset

About the Project: Tree Species Identification

Uses hyperspectral/LiDAR data to classify tree species in forests, aiding biodiversity monitoring.

Learning Objectives

To map species distribution using remote sensing and spectral signatures.

Dataset Acquisition and Preparation:
  • Source labeled hyperspectral data (e.g., NEON Airborne Observatory).
  • Augment data to address class imbalance (rare species).

EDA:
  • Plot spectral reflectance curves by species.
  • Apply PCA to reduce dimensionality.

Feature Engineering:
  • Extract texture/vegetation indices (NDVI, EVI).
  • Include elevation (LiDAR-derived) as a feature.

Model Development:
  • Random Forest or ResNet (for image-like hyperspectral cubes).
  • Transfer learning with pre-trained CNNs.

Evaluation:
  • Metrics: Per-class accuracy, Cohen’s Kappa.
  • Validate with field surveys.
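
A minimal sketch of the PCA-plus-Random-Forest route; tree_spectra.csv, its band_* columns, and the species label are hypothetical stand-ins for a labeled hyperspectral table:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, cohen_kappa_score

# Hypothetical table: one row per tree crown, spectral bands plus species label
df = pd.read_csv("tree_spectra.csv")
band_cols = [c for c in df.columns if c.startswith("band_")]
X, y = df[band_cols], df["species"]

# PCA compresses hundreds of hyperspectral bands into a few components
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)
print("Explained variance retained:", pca.explained_variance_ratio_.sum())

X_train, X_test, y_train, y_test = train_test_split(
    X_reduced, y, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print("Cohen's kappa:", cohen_kappa_score(y_test, y_pred))
print(classification_report(y_test, y_pred))  # per-class accuracy view
```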

Data Source Link: https://www.kaggle.com/datasets/mexwell/5m-trees-dataset

About the Project: EV Charging Demand Prediction

Predicts EV charging station demand using traffic, weather, and historical usage data.

Learning Objectives

Forecast short-term charging demand to optimize grid load.

Dataset Acquisition and Preparation:
  • Merge traffic counts, EV registrations, and station usage logs.
  • Engineer temporal features (hour, weekday, holidays).

EDA:
  • Identify peak demand periods and correlations with temperature.

Model Development:
  • Gradient Boosting (e.g., XGBoost) for tabular data.
  • Graph Neural Networks for spatial station relationships.

Evaluation:
  • Metrics: RMSE, MAE for demand prediction.
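
A minimal forecasting sketch with XGBoost; station_usage.csv and its columns (timestamp, temperature, sessions) are hypothetical:

```python
import pandas as pd
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical hourly usage log: timestamp, temperature, sessions per station
df = pd.read_csv("station_usage.csv", parse_dates=["timestamp"])

# Temporal features named above: hour, weekday, and a simple weekend flag
df["hour"] = df["timestamp"].dt.hour
df["weekday"] = df["timestamp"].dt.dayofweek
df["is_weekend"] = (df["weekday"] >= 5).astype(int)

# Chronological split, since this is a forecasting task
features = ["hour", "weekday", "is_weekend", "temperature"]
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]

model = XGBRegressor(n_estimators=300, learning_rate=0.1, random_state=42)
model.fit(train[features], train["sessions"])
pred = model.predict(test[features])
print("RMSE:", mean_squared_error(test["sessions"], pred) ** 0.5)
print("MAE:", mean_absolute_error(test["sessions"], pred))
```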

Data Source Link: https://www.kaggle.com/datasets/sahirmaharajj/electric-vehicle-population-size-2024

About the Project: Greenhouse Gas Emission Prediction

Predicts CO₂/Methane emissions using industrial activity, satellite data (TROPOMI), and climate variables.

Learning Objectives

Quantify emission sources and project future trends.

Dataset Acquisition and Preparation:
  • Combine satellite emissions data (Sentinel-5P), energy reports, and GDP trends.
  • Address missing values in time series.

EDA:
  • Map emission hotspots and seasonal trends.

Feature Engineering:
  • Lag features for industrial output.
  • Climate anomalies (e.g., droughts affecting wildfires).

Model Development:
  • Prophet or Transformer-based models for long-term forecasts.
  • SHAP values to interpret drivers.

Evaluation:
  • Metrics: MAPE and R² against reported emissions.
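
A minimal Prophet sketch for the long-term forecast; monthly_emissions.csv is a hypothetical series already shaped into Prophet's expected ds/y columns:

```python
import pandas as pd
from prophet import Prophet

# Hypothetical monthly series: "ds" (date) and "y" (emissions in CO2e)
df = pd.read_csv("monthly_emissions.csv", parse_dates=["ds"])

# Yearly seasonality matches the seasonal trends noted in the EDA step
model = Prophet(yearly_seasonality=True)
model.fit(df)

# Forecast 24 months beyond the observed data
future = model.make_future_dataframe(periods=24, freq="MS")
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```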

Data Source Link: https://www.kaggle.com/datasets/sahirmaharajj/supply-chain-greenhouse-gas-emission

Python

  • Introduction to Python: Python, created by Guido van Rossum, is a versatile programming language widely used for web development, data analysis, artificial intelligence, and more.
  • Setting up your Python environment: Choose an Integrated Development Environment (IDE) like Jupyter or VSCode and install libraries using package managers like pip to set up your Python environment efficiently.
  • Data types and variables: Python supports various data types such as numbers, strings, lists, and dictionaries, providing flexibility for diverse programming needs.
  • Operators and expressions: Python offers a range of operators, including arithmetic, comparison, and logical operators, allowing concise expression of complex operations.
  • Conditional statements: Employ conditional statements like if, elif, and else to execute specific code blocks based on different conditions in your Python programs.
  • Looping constructs: Utilize looping constructs, such as for and while loops, to iterate through data structures or execute a set of instructions repeatedly.
  • Functions: Define functions to encapsulate reusable code, pass arguments, and return values, promoting code modularity and readability in Python.
  • Basic data structures: Python's fundamental data structures, including lists, tuples, and dictionaries, empower efficient storage and manipulation of data in various formats.
  • Data manipulation: Master data manipulation techniques like indexing, slicing, and iterating to extract and transform data effectively in Python.
  • Working with files: Learn file handling in Python for tasks like reading, writing, and processing data from external files.
  • Introduction to modules and libraries: Leverage powerful Python libraries like NumPy for numerical computing and Pandas for data manipulation and analysis to enhance your coding capabilities.
  • Resources:
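
A small self-contained example touching several of the topics above (a function, conditionals, a loop, and a dictionary); the site names and thresholds are purely illustrative:

```python
def classify_emission(co2_tonnes):
    """Label an emission value using illustrative thresholds."""
    if co2_tonnes < 10:
        return "low"
    elif co2_tonnes < 100:
        return "medium"
    else:
        return "high"

# A dictionary mapping hypothetical sites to annual CO2 readings in tonnes
readings = {"plant_a": 8.5, "plant_b": 52.0, "plant_c": 310.0}

# Loop over the dictionary and print a label for each site
for name, value in readings.items():
    print(f"{name}: {value} t CO2 -> {classify_emission(value)}")
```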

Power BI

What is Power BI (Business Intelligence)?

Imagine a toolbox that helps you turn a jumble of raw data, from spreadsheets to cloud databases, into clear, visually stunning insights. That's Microsoft Power BI in a nutshell! It's a suite of software and services that lets you connect to various data sources, clean and organize the information, and then bring it to life with interactive charts, graphs, and maps. Think of it as a powerful storyteller for your data, helping you uncover hidden trends, track progress toward goals, and make informed decisions.

Useful Links for Self-Study:

Exploratory Data Analysis (EDA)

  • Introduction to EDA: Exploratory Data Analysis (EDA) involves systematically analyzing and visualizing data to discover patterns, anomalies, and insights, playing a crucial role in understanding the underlying structure of the data.
  • Importing and loading Data: Data can be imported into Python using various formats such as CSV, Excel, or SQL, providing a foundation for EDA and subsequent analysis.
  • Data cleaning and preprocessing: Cleaning and preprocessing steps, including handling missing values, outliers, and inconsistencies, are essential for ensuring the accuracy and reliability of the data.
  • Descriptive statistics: Descriptive statistics, encompassing measures of central tendency and dispersion, offer a summary of the main characteristics of the dataset.
  • Data visualization: Visualizations like histograms, boxplots, and scatter plots provide a powerful means to explore data distributions, relationships, and outliers, enhancing the interpretability of the dataset.
  • Identifying patterns and relationships: EDA enables the identification of patterns and relationships within the data, helping to uncover hidden insights and guide subsequent analysis.
  • Univariate and bivariate analysis: Univariate analysis focuses on individual variables, while bivariate analysis explores relationships between pairs of variables, offering a comprehensive understanding of the dataset's structure.
  • Feature engineering: Feature engineering involves creating new features from existing data, enriching the dataset with additional information to improve the performance of machine learning models.
  • Hypothesis generation: EDA findings often lead to hypothesis generation, fostering a deeper understanding of the data and guiding further research questions or analytical approaches.
  • Resources:
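
A minimal EDA sketch with pandas, Matplotlib, and Seaborn; dataset.csv and the feature_1/feature_2 columns are placeholders:

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Load any tabular dataset ("dataset.csv" is a placeholder)
df = pd.read_csv("dataset.csv")

# Descriptive statistics and missing-value counts
print(df.describe())
print(df.isna().sum())

# Univariate view: distribution of a single numeric column
sns.histplot(df["feature_1"])
plt.show()

# Bivariate view: relationship between two variables
sns.scatterplot(data=df, x="feature_1", y="feature_2")
plt.show()

# Correlation heatmap across all numeric columns
sns.heatmap(df.select_dtypes("number").corr(), annot=True, cmap="coolwarm")
plt.show()
```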

Data Visualization

  • Principles of data visualization: Effective data visualizations prioritize clarity, ensuring that the intended message is easily understandable, and accuracy, representing data truthfully and without distortion.
  • Choosing the right chart: Select appropriate chart types, such as bar charts, pie charts, line charts, or maps, based on the nature of your data and the insights you aim to convey.
  • Matplotlib and Seaborn libraries: Matplotlib and Seaborn are powerful Python libraries for creating both simple and advanced visualizations, providing flexibility and customization options.
  • Customizing visuals: Customize visual elements, including colors, labels, axes, and titles, to enhance the overall aesthetics and effectiveness of your data visualizations.
  • Interactive visualizations: Utilize libraries like Plotly and Bokeh to create interactive visualizations, allowing users to engage with and explore data dynamically.
  • Data storytelling: Data storytelling involves using visuals as a narrative tool to communicate insights effectively, making data more accessible and compelling for a broader audience.
  • Best practices for presenting visualizations: When presenting data visualizations, adhere to best practices such as providing context, focusing on key insights, and ensuring clarity to effectively convey the intended message.
  • Resources:
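
A short Matplotlib customization sketch using invented illustrative data (the numbers are not from any real source):

```python
import matplotlib.pyplot as plt

# Illustrative data: monthly output of two hypothetical energy sources
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
solar = [40, 48, 60, 72, 85, 90]
wind = [55, 50, 47, 44, 40, 38]

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(months, solar, marker="o", color="orange", label="Solar")
ax.plot(months, wind, marker="s", color="steelblue", label="Wind")

# Customization: title, axis labels, legend, and a light grid
ax.set_title("Monthly Energy Output (illustrative data)")
ax.set_xlabel("Month")
ax.set_ylabel("Output (GWh)")
ax.legend()
ax.grid(alpha=0.3)
plt.tight_layout()
plt.show()
```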