
2020 | Book

Getting Structured Data from the Internet

Running Web Crawlers/Scrapers on a Big Data Production Scale

About this book

Utilize web scraping at scale to quickly get unlimited amounts of free data available on the web into a structured format. This book teaches you to use Python scripts to crawl websites at scale, scrape data from HTML and JavaScript-enabled pages, and convert it into structured data formats such as CSV, Excel, or JSON, or load it into a SQL database of your choice.

This book goes beyond the basics of web scraping and covers advanced topics such as natural language processing (NLP) and text analytics to extract names of people, places, email addresses, contact details, etc., from a page at production scale using distributed big data techniques on an Amazon Web Services (AWS)-based cloud infrastructure. The book also covers developing a robust data processing and ingestion pipeline on the Common Crawl corpus, a web crawl dataset containing petabytes of publicly available data that is hosted on AWS's Registry of Open Data.

Getting Structured Data from the Internet also includes a step-by-step tutorial on deploying your own crawlers using a production web scraping framework (such as Scrapy) and dealing with real-world issues (such as breaking Captcha, proxy IP rotation, and more). Code used in the book is provided to help you understand the concepts in practice and write your own web crawler to power your business ideas.
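
The core workflow the book describes, turning raw HTML into a structured file, can be illustrated with a minimal sketch using requests and Beautiful Soup. The URL and CSS selectors below are placeholders, not code from the book:

import csv

import requests
from bs4 import BeautifulSoup

# Placeholder URL and selectors -- substitute a page you are allowed to scrape.
URL = "https://example.com/articles"

response = requests.get(URL, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.text, "lxml")

# Collect one row per article link: title text and href.
rows = [
    {"title": link.get_text(strip=True), "url": link.get("href")}
    for link in soup.select("h2 a")
]

# Write the scraped rows out as structured CSV data.
with open("articles.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "url"])
    writer.writeheader()
    writer.writerows(rows)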

What You Will Learn

- Understand web scraping, its applications/uses, and how to avoid web scraping by hitting publicly available REST API endpoints to directly get data (a minimal sketch of this follows the list)
- Develop a web scraper and crawler from scratch using the lxml and Beautiful Soup libraries, and learn about scraping from JavaScript-enabled pages using Selenium
- Use AWS-based cloud computing with EC2, S3, Athena, SQS, and SNS to analyze, extract, and store useful insights from crawled pages
- Use the SQL language on PostgreSQL running on Amazon Relational Database Service (RDS) and on SQLite using SQLAlchemy
- Review scikit-learn, Gensim, and spaCy to perform NLP tasks on scraped web pages such as named entity recognition, topic clustering (K-means, agglomerative clustering), topic modeling (LDA, NMF, LSI), topic classification (naive Bayes, gradient boosting classifier), and text similarity (cosine distance-based nearest neighbors)
- Handle web archival file formats and explore Common Crawl open data on AWS
- Illustrate practical applications for web crawl data by building a similar-website tool and a technology profiler similar to builtwith.com
- Write scripts to create a backlinks database on a web scale similar to Ahrefs.com, Moz.com, Majestic.com, etc., for search engine optimization (SEO), competitor research, and determining website domain authority and ranking
- Use web crawl data to build a news sentiment analysis system or alternative financial analysis covering stock market trading signals
- Write a production-ready crawler in Python using the Scrapy framework and deal with practical workarounds for CAPTCHAs, IP rotation, and more
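
To make the first point above concrete, here is a hedged sketch of pulling structured data straight from a public REST API instead of scraping HTML. The endpoint and field names are illustrative placeholders, not a real documented API:

import requests

# Hypothetical JSON endpoint -- replace with a real, documented public API.
API_URL = "https://api.example.com/v1/posts"

response = requests.get(API_URL, params={"limit": 25}, timeout=10)
response.raise_for_status()

# The API already returns structured JSON, so no HTML parsing is needed.
for post in response.json():
    print(post.get("title"), post.get("score"))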

Who This Book Is For

Primary audience: data analysts and data scientists with little to no exposure to real-world data processing challenges. Secondary: experienced software developers doing web-heavy data processing who need a primer. Tertiary: business owners and startup founders who need to know more about implementation to better direct their technical team.

Table of Contents

Frontmatter
Chapter 1. Introduction to Web Scraping
Abstract
In this chapter, you will learn about the common use cases for web scraping. The overall goal of this book is to take raw web crawls and transform them into structured data which can be used for providing actionable insights. We will demonstrate applications of such structured data, obtained from a REST API endpoint, by performing sentiment analysis on Reddit comments. Lastly, we will talk about the different steps of the web scraping pipeline and how we are going to explore them in this book.
Jay M. Patel
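
As a hedged sketch of the kind of sentiment analysis Chapter 1 demonstrates on Reddit comments: one common approach is NLTK's VADER analyzer. The comments below are stand-ins, and the chapter's own code may use a different library.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# One-time download of the VADER lexicon.
nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

# Stand-in comments -- in the chapter these would come from a Reddit API response.
comments = [
    "This library saved me hours of work, highly recommended.",
    "The documentation is confusing and the examples do not run.",
]

for comment in comments:
    # compound ranges from -1 (most negative) to +1 (most positive).
    print(analyzer.polarity_scores(comment)["compound"], comment)
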
Chapter 2. Web Scraping in Python Using Beautiful Soup Library
Abstract
In this chapter, we’ll go through the basic building blocks of web pages such as HTML and CSS and demonstrate scraping structured information from them using popular Python libraries such as Beautiful Soup and lxml. Later, we’ll expand our knowledge and tackle issues that will make our scraper into a full-featured web crawler capable of fetching information from multiple web pages.
Jay M. Patel
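
A hedged sketch of the progression Chapter 2 describes, from a single-page scraper to a crawler that follows links across multiple pages. The seed URL and page limit are placeholders, and the chapter's own crawler is more complete.

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

SEED = "https://example.com/"   # placeholder seed URL
MAX_PAGES = 5                   # keep the demo small and polite

seen, queue = set(), [SEED]

while queue and len(seen) < MAX_PAGES:
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)

    soup = BeautifulSoup(requests.get(url, timeout=10).text, "lxml")
    print(url, "->", soup.title.string if soup.title else "(no title)")

    # Queue same-site links discovered on this page.
    for link in soup.find_all("a", href=True):
        absolute = urljoin(url, link["href"])
        if absolute.startswith(SEED):
            queue.append(absolute)
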
Chapter 3. Introduction to Cloud Computing and Amazon Web Services (AWS)
Abstract
In this chapter, you will learn the fundamentals of cloud computing and get an overview of select products from Amazon Web Services. AWS offers a free tier where a new user can access many of the services free for a year, and this will make almost all examples here close to free for you to try out. Our goal is that by the end of this chapter, you will be comfortable enough with AWS to perform almost all the analysis in the rest of the book on the AWS cloud itself instead of locally.
Jay M. Patel
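
To show what driving AWS from Python looks like in practice, a minimal boto3 sketch for S3 follows. The bucket and key names are placeholders, and credentials are assumed to be configured already (the chapter itself covers account setup).

import boto3

BUCKET = "my-example-crawl-bucket"  # placeholder bucket name

s3 = boto3.client("s3")

# Upload a local file of scraped data, then list what the bucket holds.
s3.upload_file("articles.csv", BUCKET, "scraped/articles.csv")

for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"], obj["Size"])
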
Chapter 4. Natural Language Processing (NLP) and Text Analytics
Abstract
In the preceding chapters, we have solely relied on the structure of the HTML documents themselves to scrape information from them, and that is a powerful method to extract information.
Jay M. Patel
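
One of the NLP tasks listed under "What You Will Learn" is named entity recognition with spaCy. A minimal, hedged sketch follows; the example sentence is illustrative only, and it assumes the small English model has been installed (python -m spacy download en_core_web_sm).

import spacy

nlp = spacy.load("en_core_web_sm")

text = "Apress published this book on web scraping in New York in 2020."
doc = nlp(text)

# Print each detected entity with its predicted label (ORG, GPE, DATE, ...).
for ent in doc.ents:
    print(ent.text, ent.label_)
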
Chapter 5. Relational Databases and SQL Language
Abstract
Relational databases organize data in rows and tables like a printed mail order catalog or a train schedule list and are indispensable for storing structured information from scraped websites.
Jay M. Patel
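
A hedged SQLAlchemy sketch of the pattern this chapter covers: define a table, store scraped rows in SQLite, and read them back. The table, column, and file names are placeholders, the code targets the SQLAlchemy 1.4+ API, and swapping the connection string points the same code at PostgreSQL on RDS.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Page(Base):
    """One scraped page stored as a row in a relational table."""
    __tablename__ = "pages"
    id = Column(Integer, primary_key=True)
    url = Column(String, nullable=False)
    title = Column(String)

# SQLite keeps everything in a local file; use a postgresql:// URL for RDS.
engine = create_engine("sqlite:///crawl.db")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Page(url="https://example.com/", title="Example Domain"))
    session.commit()
    for page in session.query(Page):
        print(page.id, page.url, page.title)
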
Chapter 6. Introduction to Common Crawl Datasets
Abstract
In this chapter, we’ll talk about an open source dataset called Common Crawl, which is available on AWS’s Registry of Open Data (https://registry.opendata.aws/).
Jay M. Patel
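
As a hedged sketch of a first step into Common Crawl, the snippet below queries the public CDX index API for captures of a domain; each returned record points into a WARC file stored on S3. The crawl label is a placeholder; current labels are listed at https://index.commoncrawl.org/.

import requests

INDEX = "https://index.commoncrawl.org/CC-MAIN-2020-24-index"  # placeholder crawl label

params = {"url": "example.com/*", "output": "json", "limit": 5}
response = requests.get(INDEX, params=params, timeout=30)
response.raise_for_status()

# Each line is a JSON record including the WARC filename, offset, and length.
for line in response.text.strip().splitlines():
    print(line)
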
Chapter 7. Web Crawl Processing on Big Data Scale
Abstract
In this chapter, we’ll learn about processing web crawl data at a big data scale using a distributed computing architecture on Amazon Web Services (AWS).
Jay M. Patel
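
One common fan-out pattern for this kind of distributed processing is a work queue: a producer enqueues file paths and worker instances pull them off. A hedged boto3/SQS sketch follows; the queue URL and message body are placeholders, and the chapter describes its own architecture in detail.

import boto3

# Placeholder queue URL and work item.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/crawl-tasks"

sqs = boto3.client("sqs")

# Producer: enqueue one unit of work (e.g., the path of a WARC file to process).
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="path/to/one/file.warc.gz")

# Worker: pull a message, process it, then delete it so it is not redelivered.
response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
for msg in response.get("Messages", []):
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
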
Chapter 8. Advanced Web Crawlers
Abstract
In this chapter, we will discuss a crawling framework called Scrapy and go through the steps necessary to crawl and upload the web crawl data to an S3 bucket.
Jay M. Patel
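
A minimal, hedged Scrapy spider in the spirit of this chapter. It targets quotes.toscrape.com, the standard Scrapy practice site, rather than whatever site the chapter uses, and the S3 upload step covered in the chapter is omitted here.

import scrapy

class QuotesSpider(scrapy.Spider):
    # Run with: scrapy runspider quotes_spider.py -o quotes.json
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one structured item per quote on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow the pagination link and parse the next page the same way.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
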
Backmatter
Metadata
Title
Getting Structured Data from the Internet
Author
Jay M. Patel
Copyright Year
2020
Publisher
Apress
Electronic ISBN
978-1-4842-6576-5
Print ISBN
978-1-4842-6575-8
DOI
https://doi.org/10.1007/978-1-4842-6576-5
