May 5th, 2020
Monitor Competitor Prices with Python and BeautifulSoup

One of the most significant applications of web scraping in retail and e-commerce is monitoring competitor price movements. Done well, this can mean extra revenue, and it ensures the retailer stays in the game and is never taken by surprise by anything the competition is doing.

Here is a simple script that does just that. We will use BeautifulSoup to extract the information, and we will track prices on Amazon.

To start with, this is the boilerplate code we need to fetch a page from Amazon and set up BeautifulSoup so we can use CSS selectors to query the page for meaningful data.

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9'}
url = 'https://www.amazon.com/s?k=shampoo&ref=nb_sb_noss_1'

response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')

We are also passing a User-Agent header to simulate a browser call, so we don't get blocked.

Now let's analyze the Amazon search results for players in the shampoo market. This is how the page looks.

And when we inspect the page, we find that each item's HTML is encapsulated in a tag with the class a-carousel-card.
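To see how this selector behaves before pointing it at the live page, here is a minimal offline sketch against a hand-written HTML fragment. The fragment and its product names/prices are made up; only the class names come from the real page, and html.parser is used here so the sketch needs no extra parser installed.

```python
from bs4 import BeautifulSoup

# Simplified stand-in for the Amazon carousel markup; only the
# class names (a-carousel-card, a-price, a-offscreen) are real
html = '''
<div class="a-carousel-card">
  <h2>Brand A Shampoo</h2>
  <span class="a-price"><span class="a-offscreen">$5.99</span></span>
</div>
<div class="a-carousel-card">
  <h2>Brand B Shampoo</h2>
  <span class="a-price"><span class="a-offscreen">$7.49</span></span>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
cards = soup.select('.a-carousel-card')

for card in cards:
    # Same selector pattern the real script will use
    print(card.select('h2')[0].get_text().strip(),
          card.select('.a-price .a-offscreen')[0].get_text())
```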

  • We could just use this class to break the HTML document into cards, each containing an individual item's information, like this.

    # -*- coding: utf-8 -*-
    from bs4 import BeautifulSoup
    import requests

    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9'}
    url = 'https://www.amazon.com/s?k=shampoo&ref=nb_sb_noss_1'

    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.content, 'lxml')

    # Print each card's raw HTML so we can inspect its structure
    for item in soup.select('.a-carousel-card'):
        print(item)
        print('----------------------------------------')

    And when you run it:

    python3 PriceTracker.py

    You can tell that the code is isolating each card's HTML, as seen below.

    On further inspection, you can see that the title of the product is always inside an h2 tag. So the code below will get us the text inside the h2 tag.

    print(item.select('h2')[0].get_text().strip())

    Further, we can find similar clues to get the price, which seems to live inside a tag with the class a-price, with a span inside it carrying the class a-offscreen. We query it like this.

    print(item.select('.a-price .a-offscreen')[0].get_text())
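    These selectors return price strings such as '$12.99'. If you want to compare prices numerically later, a small helper can convert them; this is a hypothetical addition, not part of the original script, and the regex assumes US-style formatting.

```python
import re
from decimal import Decimal

def parse_price(text):
    # Hypothetical helper: pull the first number out of a price
    # string like '$12.99' or '$1,299.00' and return it as a
    # Decimal; returns None when no digits are found
    match = re.search(r'\d[\d,]*\.?\d*', text)
    if not match:
        return None
    return Decimal(match.group().replace(',', ''))

print(parse_price('$12.99'))
print(parse_price('$1,299.00'))
```

    Decimal is used rather than float so that price comparisons are exact.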

    Now let's put the whole thing together.

    # -*- coding: utf-8 -*-
    from bs4 import BeautifulSoup
    import requests

    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9'}
    url = 'https://www.amazon.com/s?k=shampoo&ref=nb_sb_noss_1'

    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.content, 'lxml')

    for item in soup.select('.a-carousel-card'):
        try:
            print('----------------------------------------')
            print(item.select('h2')[0].get_text().strip())                      # title
            print(item.select('.a-price .a-offscreen')[0].get_text())           # current price
            print(item.select('.a-text-price .a-offscreen')[0].get_text())      # 'original' price
            print(item.select('.a-size-base.a-color-secondary')[0].get_text())  # quantity info
            print('----------------------------------------')
        except IndexError:
            # Some cards are ads or are missing a field; skip them
            continue

    And when we run this with

    python3 PriceTracker.py

    We get the name, the price, the 'original' price, and the quantity info for each item.

    You can now save this to a database and run the script every day or every hour as needed.
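    As a sketch of the 'save this to a DB' step, here is one minimal way to do it with the standard library's sqlite3 module. The table name, schema, and helper name are illustrative assumptions, not part of the original script.

```python
import sqlite3
from datetime import datetime, timezone

def save_prices(rows, db_path='prices.db'):
    # Hypothetical helper: append (title, price) pairs with a UTC
    # timestamp so price movements can be tracked over time
    conn = sqlite3.connect(db_path)
    conn.execute(
        'CREATE TABLE IF NOT EXISTS prices ('
        'title TEXT, price TEXT, scraped_at TEXT)')
    ts = datetime.now(timezone.utc).isoformat()
    conn.executemany(
        'INSERT INTO prices VALUES (?, ?, ?)',
        [(title, price, ts) for title, price in rows])
    conn.commit()
    conn.close()
```

    Each run appends a fresh snapshot, so a simple SQL query over scraped_at gives you a price history per product.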

    In more advanced implementations, you will even need to rotate the User-Agent string, so Amazon can't tell it's the same browser making all the requests!
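    A minimal sketch of such rotation, assuming you maintain your own pool of User-Agent strings (the list below is illustrative; in practice you would keep a larger, up-to-date one):

```python
import random

# Illustrative pool of desktop User-Agent strings
USER_AGENTS = [
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:74.0) Gecko/20100101 Firefox/74.0',
]

def random_headers():
    # Pick a different User-Agent on each call, so consecutive
    # requests do not all present the same browser
    return {'User-Agent': random.choice(USER_AGENTS)}
```

    You would then call requests.get(url, headers=random_headers()) instead of reusing one fixed headers dict.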

    As you get a little more advanced, you will realize that Amazon can simply block your IP, ignoring all your other tricks. This is a bummer, and it is where most web crawling projects fail.

    Overcoming IP Blocks

    Investing in a private rotating proxy service like Proxies API can, most of the time, make the difference between a successful, headache-free web scraping project that gets the job done consistently and one that never really works.

    Plus, with the 1000 free API calls we are currently offering, you have almost nothing to lose by trying our rotating proxy and comparing notes. It takes only one line of integration, so it's hardly disruptive.

    Our rotating proxy server Proxies API provides a simple API that can solve all IP Blocking problems instantly.

    • With millions of high-speed rotating proxies located all over the world
    • With our automatic IP rotation
    • With our automatic User-Agent-String rotation (which simulates requests from different, valid web browsers and web browser versions)
    • With our automatic CAPTCHA solving technology

    Hundreds of our customers have successfully solved the headache of IP blocks with a simple API.

    The whole thing can be accessed with a simple API, like the one below, from any programming language:

    curl "http://api.proxiesapi.com/?key=API_KEY&url=https://example.com"
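    For reference, the same request URL could be assembled from Python like this. This is a sketch: the endpoint and the 'key' and 'url' parameter names are taken from the curl line above, and the function name is made up.

```python
from urllib.parse import urlencode

def proxies_api_url(target_url, api_key):
    # Assemble the Proxies API request URL; urlencode takes care of
    # escaping the target URL so it survives as a query parameter
    return 'http://api.proxiesapi.com/?' + urlencode(
        {'key': api_key, 'url': target_url})

print(proxies_api_url('https://example.com', 'API_KEY'))
```

    You could then pass the resulting URL to requests.get() exactly as in the scraping code above.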

    We have a running offer of 1000 API calls completely free. Register and get your free API Key here.
