52 Amazing Python
Projects For Developers
Edcorner Learning
Table of Contents
Introduction
Python is a general-purpose, interpreted, interactive, object-oriented programming language with dynamic semantics. It is easy to learn and master, and it is one of the rare languages that can claim to be both simple and powerful. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and for robust application development across many areas and platforms.
Python supports modules and packages, which encourages program modularity and code reuse. The Python interpreter and the extensive standard library are available in source or binary form, free of charge, for all major platforms, and can be freely distributed. Learning Python requires no formal prerequisites, although a fundamental understanding of programming concepts helps.
This book consists of 52 Python projects for developers and students to practice across different scenarios. Use what you learn here in professional tasks or in your own daily projects.
At the end of this book, you can download all of these projects using our link.
The 52 projects are divided into modules, and each project tackles, in its own way, a task a developer performs day to day. Every project includes source code that learners can copy and practice with on their own systems. If a project has any special requirements, they are mentioned in the book.
Happy learning!!
Module 1: Projects 1-10
1. LinkedIn Email Scraper
## Prerequisites:
1. Do `pip install -r requirements.txt` to make sure you have the necessary libraries.
2. Make sure you have a **chromedriver** installed and added to PATH.
3. Have the **URL** to your desired LinkedIn post ready (*make sure the post has some emails in the comments section*).
4. Have your **LinkedIn** account credentials ready.
## Executing Application
1. Replace the values of the URL, email and password variables in the code with your own data
2. Either hit **run** if your IDE has the option or just type in `python main.py` in the terminal.
3. The names and corresponding email address scraped from the post should appear in the **emails.csv** file.
Requirements:
selenium
email-validator
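The `email-validator` package does the real validation work in the script below. To see the idea behind a purely syntactic check without the library, here is a minimal sketch in plain Python; the pattern and helper name are illustrative only, and a deliberately loose regex like this does far less than `email-validator`, which also normalizes the address and can check deliverability:

```python
import re

# A simple shape check: something@something.something, no spaces or extra '@'
EMAIL_PATTERN = re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$')

def looks_like_email(candidate: str) -> bool:
    """Return True if the string is shaped like an email address."""
    return bool(EMAIL_PATTERN.match(candidate))

print(looks_like_email('jane.doe@example.com'))  # True
print(looks_like_email('not-an-email'))          # False
```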
Source Code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
from email_validator import validate_email, EmailNotValidError


def LinkedInEmailScraper(userEmail, userPassword):
    emailList = {}
    browser = webdriver.Chrome()
    # example => 'https://www.linkedin.com/posts/faangpath_hiring-womxn-ghc2020-activity-6721287139721650176-QFCV/'
    url = '[INSERT URL TO LINKEDIN POST]'
    browser.get(url)  # visits the page of the desired post
    browser.implicitly_wait(5)
    commentDiv = browser.find_element(
        By.XPATH, '/html/body/main/section[1]/section[1]/div/div[3]/a[2]'
    )  # finds the comment button
    loginLink = commentDiv.get_attribute('href')
    browser.get(loginLink)
    email = browser.find_element(By.XPATH, '//*[@id="username"]')
    password = browser.find_element(By.XPATH, '//*[@id="password"]')
    email.send_keys(userEmail)  # inputs email in the email field
    password.send_keys(userPassword)  # inputs password in the password field
    submit = browser.find_element(
        By.XPATH, '//*[@id="app__container"]/main/div[3]/form/div[3]/button')
    submit.submit()  # submits the login form
    browser.implicitly_wait(5)
    commentSection = browser.find_element(
        By.CSS_SELECTOR, '.comments-comments-list')  # finds the comments section
    # raise this number (or use "while True") to walk the whole comment section
    for _ in range(10):
        try:
            moreCommentsButton = commentSection.find_element(
                By.CLASS_NAME, 'comments-comments-list__show-previous-container'
            ).find_element(By.TAG_NAME, 'button')
            moreCommentsButton.click()
            browser.implicitly_wait(5)
        except NoSuchElementException:
            print('End of checking comments')
            break
    browser.implicitly_wait(20)
    comments = commentSection.find_elements(
        By.TAG_NAME, 'article')  # finds all individual comments
    for comment in comments:
        try:
            commenterName = comment.find_element(
                By.CLASS_NAME, 'hoverable-link-text')  # finds the commenter's name
            commentText = comment.find_element(By.TAG_NAME, 'p')
            commenterEmail = commentText.find_element(
                By.TAG_NAME, 'a').get_attribute('innerHTML')  # finds the commenter's email
            # validates the email address
            validEmail = validate_email(commenterEmail)
            commenterEmail = validEmail.email
        except (NoSuchElementException, EmailNotValidError):
            continue
        emailList[commenterName.get_attribute('innerHTML')] = commenterEmail
    browser.quit()
    return emailList
def DictToCSV(input_dict):
    '''
    Converts a dictionary into a csv file
    '''
    with open('./LinkedIn Email Scraper/emails.csv', 'w') as f:
        f.write('name,email\n')
        for key in input_dict:
            f.write('%s,%s\n' % (key, input_dict[key]))
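DictToCSV builds each row by hand, which breaks if a name or address ever contains a comma. The standard library's csv module handles quoting automatically; a small sketch of the same idea, with an illustrative filename:

```python
import csv

def dict_to_csv(input_dict, path='emails.csv'):
    # csv.writer escapes commas and quotes inside values for us
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['name', 'email'])
        for name, email in input_dict.items():
            writer.writerow([name, email])

dict_to_csv({'Doe, Jane': 'jane@example.com'})
```

A value like "Doe, Jane" is written as a single quoted field instead of splitting into two columns.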
if __name__ == '__main__':
    userEmail = '[INSERT YOUR EMAIL ADDRESS FOR LINKEDIN ACCOUNT]'
    userPassword = '[INSERT YOUR PASSWORD FOR LINKEDIN ACCOUNT]'
    emailList = LinkedInEmailScraper(userEmail, userPassword)
    DictToCSV(emailList)
2. Cricbuzz Scraper
This Python script scrapes cricbuzz.com to get live scores of ongoing matches.
## Setup
* Install the dependencies
`pip install -r requirements.txt`
* Run the file
`python live_score.py`
Requirements:
beautifulsoup4==4.9.3
bs4==0.0.1
pypiwin32==223
pywin32==228
soupsieve==2.0.1
urllib3==1.26.5
win10toast==0.9
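The core scraping pattern in the script below is BeautifulSoup's `find_all` with a class-attribute filter. Here is a self-contained sketch of that pattern on a tiny inline HTML snippet; the class names are invented for illustration, not Cricbuzz's actual markup:

```python
from bs4 import BeautifulSoup

html = '''
<div class="match"><span class="score">IND 245/4</span></div>
<div class="match"><span class="score">AUS 198/7</span></div>
'''

soup = BeautifulSoup(html, 'html.parser')
# find_all returns every tag matching the name and attribute filter;
# find then drills into each match for a single child tag
for match in soup.find_all('div', attrs={'class': 'match'}):
    print(match.find('span', attrs={'class': 'score'}).text.strip())
```

The real page's class strings are much longer, but the `find_all` / `find` / `.text.strip()` chain works the same way.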
Source Code:
from urllib.request import urlopen, Request
from bs4 import BeautifulSoup
from win10toast import ToastNotifier
import time
URL = 'http://www.cricbuzz.com/cricket-match/live-scores'
def notify(title, score):
    # Function for a Windows toast desktop notification
    toaster = ToastNotifier()
    toaster.show_toast("CRICKET LIVE SCORE",
                       score,
                       duration=30,
                       icon_path='ipl.ico')
while True:
    request = Request(URL, headers={'User-Agent': 'XYZ/3.0'})
    response = urlopen(request, timeout=20).read()
    soup = BeautifulSoup(response, 'html.parser')
    for score in soup.find_all(
            'div',
            attrs={
                'class':
                'cb-col cb-col-100 cb-plyr-tbody cb-rank-hdr cb-lv-main'
            }):
        header = score.find('div',
                            attrs={'class': 'cb-col-100 cb-col cb-schdl'})
        header = header.text.strip()
        status = score.find('div',
                            attrs={'class': 'cb-scr-wll-chvrn cb-lv-scrs-col'})
        s = status.text.strip()
        notify(header, s)
    time.sleep(60)  # waits a minute before polling the site again