Web Scraping Tutorial Using Selenium & Python in 2024


Data is universally required to address issues in research and business. Forms, interviews, surveys, and questionnaires are all ways to get data, but they don’t fully utilize the largest data source of all: the Internet, which holds information on practically any topic you can imagine. Sadly, most websites don’t let users save the data that appears on their pages. Web scraping fixes this issue by letting users extract vast amounts of the data they need.

This blog will discuss what Selenium web scraping is, how to use Selenium and Python for web scraping, and related topics.


What is Selenium Web Scraping?

Web scraping is the technique of obtaining data from websites. It is a powerful method that transforms how data is gathered and processed. Because there is so much data on the internet, web scraping has become an essential tool for both consumers and businesses.

Selenium is an open-source tool for automating web browsers. Originally created in 2004, its primary purpose was automated testing of applications and web pages across different browsers, but it has since become a popular web scraping tool as well. Selenium is compatible with several programming languages, including Python, Java, and C#, and it offers strong APIs for interacting with web pages, including scrolling, clicking, typing, and navigating.

By combining Python with the Selenium browser automation tool, Selenium web scraping can retrieve data from a wide range of websites. Selenium allows developers to interact with websites as a human user would by offering programmatic control of the web browser. Check out Online Selenium Training. Enroll now!


Why Use Selenium and Python for Web Scraping?

Python is one of the most popular languages for web scraping because it offers modules and frameworks that simplify data extraction from websites.

Compared to alternative web scraping methods, using Python and Selenium for web scraping has the following benefits:

Dynamic websites: Dynamic web pages are generated with JavaScript or other scripting languages. Elements on these pages often appear only after the page has fully loaded or after the user interacts with it. Because Selenium can interact with those elements, it is well suited to scraping data from dynamic websites (a short sketch of this appears after the list below).

User interactions: Selenium can mimic user behaviors such as clicks, form submissions, and scrolling. This enables you to scrape webpages that require user input, like login forms.

Debugging: By running Selenium in debug mode, you can observe what the scraper is doing at each stage of the scraping process, which helps with troubleshooting when something goes wrong. Enroll in our Selenium classes in Pune and become an expert in software testing.
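To make the first point concrete, here is a minimal sketch of waiting for dynamically loaded content before reading it. The URL and the element ID "results" are placeholders chosen for illustration, not taken from a real site:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.example.com")  # placeholder URL

# Wait up to 10 seconds for an element that JavaScript renders after the page loads
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "results"))  # hypothetical element ID
)
print(element.text)

driver.quit()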

Prerequisites for Web Scraping with Selenium

You have Python 3 installed on your computer.

The Selenium library installed. You can install it with pip by running the following command:

pip install selenium

WebDriver Installed

Selenium uses a separate executable called WebDriver to control the browser. You can download WebDriver for the most widely used browsers from the following links:

  • Chrome: https://chromedriver.chromium.org/downloads

  • Firefox: Go to https://github.com/mozilla/geckodriver/releases

  • Microsoft Edge:  https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/

Alternatively, and by far the simplest method, you can use a package manager such as webdriver-manager to install the WebDriver. It automatically downloads and installs the correct WebDriver for you. To install webdriver-manager, use this command:

pip install webdriver-manager
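Once webdriver-manager is installed, a minimal sketch of using it (shown here with Chrome, assuming Selenium 4) looks like this:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# Download (and cache) the matching ChromeDriver, then start the browser
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get("https://www.example.com")
print(driver.title)
driver.quit()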


A Step-By-Step Guide to Selenium Web Scraping

Step 1: Setup and Imports

Before starting, make sure Selenium and a suitable driver are installed. Here, we’ll use the Edge driver as an example.

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By

Step 2: Set Up and Launch the WebDriver

We can use the following code to launch a fresh instance of the Edge driver:

driver = webdriver.Edge()
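If you prefer the scraper to run without opening a visible browser window, Edge can also be started headless. This is an optional sketch; the exact flag may vary with your Edge version:

from selenium import webdriver

options = webdriver.EdgeOptions()
options.add_argument("--headless=new")  # run without a visible browser window
driver = webdriver.Edge(options=options)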

Step 3: Use Python to Visit the Website

Next, we need to visit the search engine’s website. Here, we’ll be using Bing.

driver.get("https://www.bing.com")

Step 4: Find the Particular Information You’re Scraping

We want to retrieve the number of search results for a specific name. To do this, we need to find the HTML element that holds the search result count.

results = driver.find_elements(By.XPATH, "//*[@id='b_tween']/span")
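The same element can usually be located in more than one way. As a sketch, equivalent lookups with other locator strategies might look like this (the selectors are illustrative and can break if Bing changes its markup):

results = driver.find_elements(By.CSS_SELECTOR, "#b_tween > span")
tween = driver.find_element(By.ID, "b_tween")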

Step 5: Put It All Together

Now that we have every component, we can combine them to retrieve the search results for a given name.

try:
    search_box = driver.find_element(By.NAME, "q")
    search_box.clear()
    search_box.send_keys("John Doe")  # enter the name in the search box
    search_box.submit()  # submit the search

    results = driver.find_elements(By.XPATH, "//*[@id='b_tween']/span")
    for result in results:
        text = result.text.split()[1]  # extract the number of results
        print(text)
        # save it to a file
        with open("results.txt", "w") as f:
            f.write(text)
except Exception as e:
    print(f"An error occurred: {e}")


Step 6: Store the Data

We can now save the extracted data in a text file.

with open("results.txt", "w") as f:
    f.write(text)
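If you plan to run the scraper repeatedly, a small sketch of appending each run’s value (the text variable scraped above) to a CSV file with a timestamp, using only the standard library, might look like this:

import csv
from datetime import datetime

with open("results.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([datetime.now().isoformat(), text])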

Using A Proxy with Selenium Wire

Selenium Wire is a package that extends Selenium’s capabilities by letting you inspect and modify HTTP requests and responses. For example, you can use it to quickly configure a proxy for your Selenium WebDriver.

Install Selenium Wire

pip install selenium-wire

Set Up The Proxy

from selenium.webdriver.chrome.options import Options
from seleniumwire import webdriver as wiredriver

PROXY_HOST = 'your.proxy.host'
PROXY_PORT = 'your_proxy_port'

chrome_options = Options()
chrome_options.add_argument('--proxy-server=http://{}:{}'.format(PROXY_HOST, PROXY_PORT))

driver = wiredriver.Chrome(options=chrome_options)
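If the proxy requires authentication, Selenium Wire can also take the proxy settings directly through its own options. A sketch, with placeholder credentials and host:

from seleniumwire import webdriver

proxy_options = {
    "proxy": {
        "http": "http://user:password@your.proxy.host:8080",    # placeholder
        "https": "https://user:password@your.proxy.host:8080",  # placeholder
        "no_proxy": "localhost,127.0.0.1",
    }
}

driver = webdriver.Chrome(seleniumwire_options=proxy_options)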



Use Selenium Wire to Inspect and Modify Requests

for request in driver.requests:
    if request.response:
        print(request.url, request.response.status_code, request.response.headers['Content-Type'])

In the code above, we loop through all of the requests the WebDriver made while scraping. For each request that received a response, we print the URL, the status code, and the content type.
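Selenium Wire can also modify requests before they are sent, via an interceptor function. Here is a short sketch; the header name is just an example:

def interceptor(request):
    # Add a custom header to every outgoing request
    request.headers["X-Example-Header"] = "my-value"

driver.request_interceptor = interceptor
driver.get("https://www.bing.com")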

Using Selenium to Get All of a Page’s Titles

Here is an example of Python code that uses Selenium to scrape all the title elements on a page:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Initialize the webdriver
driver = webdriver.Chrome()

# Navigate to the webpage
driver.get("https://www.example.com")

# Find all the title elements on the page
# (note: driver.title also returns the page title directly)
title_elements = driver.find_elements(By.TAG_NAME, "title")

# Extract the text from each title element
titles = [title.text for title in title_elements]

# Print the list of titles
print(titles)

# Close the webdriver
driver.quit()

In this example, we import the webdriver module from Selenium and create a new instance of the Chrome WebDriver. After navigating to the webpage we want to scrape, we locate every title element on the page using the find_elements method with By.TAG_NAME.

We then extract the text from each title element using a list comprehension and store the resulting list in a variable named titles. Lastly, we print the list of titles and close the WebDriver instance.

Note that your Python environment must have the Selenium library and the Chrome WebDriver installed for this code to work. Using pip, you can install them as follows:

pip install selenium chromedriver-binary

Don’t forget to update the URL so that it points to the target webpage you want to scrape. Want a brighter career as a software tester? Check out the Software Testing Course in Pune.



The Bottom Line

In conclusion, web scraping with Selenium is an effective way to retrieve data from websites. It lets you automate data collection, which saves a great deal of time and effort. With Selenium, you can interact with websites just as a real user would and extract the data you need more quickly.

Alternatively, you can quickly extract every text element from HTML using no-code tools such as Nanonets’ website scraper, which is completely free to use. To master these skills, learn from industry experts at 3RI Technologies.
