What is Web Crawling?

Introduction

Web crawling can be a very complicated and technical subject to understand.  Every web page on the Internet is different from the next, which means every web crawler is different (at least in some way) from the next.

We do a lot of web crawling to collect the data you see in Datafiniti.  In order to help our users get a better understanding of how this process works, we’re embarking on an extensive series of posts to provide better insight into what a web crawler is, how it works, how it can be used, and the challenges involved.

Here are the posts we have planned:

  1. What is Web Crawling?
  2. Typical use cases for web crawlers
  3. Different data formats for storing data from a web crawl: CSV, JSON, and Databases
  4. Techniques for scraping data
  5. How is a web crawler different from a search engine?
  6. Making sure a web crawler behaves well
  7. How to use JSON data
  8. Challenges with scraping data
  9. Web crawling use cases: collecting pricing data
  10. Web crawling use cases: collecting business reviews
  11. Web crawling use cases: collecting product reviews
  12. Comparison of different web crawlers

So let’s get started!

The Web Page, Deconstructed

We actually need to define what a web page is before we can really understand how a web crawler works.  A lot of people think of a web page as what they see in their browser window, which is true for people, but it's not what a web crawler sees.  So let's look at a web page the way a web crawler does.

When you visit http://www.cnn.com, you see something like this:

 

CNN.com homepage screenshot

 

In fact, what you are seeing is a combination of many different "resources" that your web browser pulls together to show you the page.  Here's an abridged version of what happens:

  1. You type in “http://www.cnn.com”.
  2. Your browser says ok, let me GET “http://www.cnn.com”.
  3. CNN's server says, hey browser, here's the content for that page.  At this point, the browser has only received the HTML source code of "http://www.cnn.com", which looks something like this:
    html_source
  4. Your browser looks through this code and notices a few things: the page needs several style resources as well as several image resources.
  5. The browser now says, I need to GET all of these resources as well.
  6. Once all the resources for the page are received, it combines them all and displays the page you see.

This is what your browser does.  A web crawler can get all the same resources, but if you tell it to GET "http://www.cnn.com", it will only fetch the HTML source code.  That's all it knows about the page until you tell it to do something else (possibly with the information in the HTML).  By the way, "GET" is the actual technical term for the type of request made by both the crawler and your browser.
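To make this concrete, here's a minimal sketch of that GET step in PHP (assuming a PHP setup with allow_url_fopen enabled).  It fetches only the HTML source code and nothing else:

  <?php
  // Perform a single GET request and keep only the HTML source code.
  // No images, stylesheets, or scripts are fetched -- just the raw HTML.
  $url = 'http://www.cnn.com';
  $html = file_get_contents($url);

  echo "Fetched " . strlen($html) . " bytes of HTML from $url\n";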

A Very Basic Web Crawler

Alright, so now that we understand that requesting “http://www.cnn.com” will only return HTML source code, let’s see what we can do with that.

Let's imagine our web crawler as a little app.  When you start this app, it asks you which web page you want to crawl.  That's its only input: a list of URLs, or in this case, a list containing just one URL.

You enter "http://www.cnn.com".  At this point, the web crawler gets the HTML source code of this URL.  The HTML is a very long piece of semi-structured text.  The crawler writes that text out to a separate file, and just to make it easy on us, it also records which URL the source code came from.

The whole thing can be visualized like this:

What is Web Crawling Illustration 1
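In code, a rough sketch of this little app might look like the following (again in PHP; the log file name and format are just illustrative choices):

  <?php
  // Basic crawler: take a list of URLs (here just one), GET the HTML source
  // for each, and write the URL plus its source code to a log file.
  $urlList = array('http://www.cnn.com');

  foreach ($urlList as $url) {
      $html = file_get_contents($url);                  // GET the HTML source only
      $entry = "URL: " . $url . "\n" . $html . "\n\n";  // remember which URL this came from
      file_put_contents('crawl_log.txt', $entry, FILE_APPEND);
  }

Because the input is already a list, handling more URLs later just means adding entries to $urlList.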

A Slightly More Complicated Web Crawler

So the web crawler can't do much right now, but it can do the basic thing any web crawler needs to do: get content from a URL.  Now we need to expand it to handle more than one URL.

There are two ways we can do this.  First, we can supply more than 1 URL in our URL list as input.  The web crawler would then iterate through each URL in this list, and write all the data to the same log file, like so:

What is Web Crawling Illustration 2

Another way would be to use the HTML source code from each URL as a way to find the next set of URLs to crawl.  If you look at the HTML source code for any page, you’ll find several references to anchor tags, which look like <a href=””>some text</a>.  These are the links you see on a web page, and they can tell the web crawler where other URLs are.

So all we need to do now is extract the URLs of those links and then feed those in as a new URL list to the app, like so:

What is Web Crawling Illustration 3
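Here's a sketch of that link-extraction step in PHP, using the built-in DOMDocument parser (URL filtering and normalization are heavily simplified for illustration):

  <?php
  // Extract the href from every anchor tag on a page and collect those
  // links as the next URL list to crawl.
  $html = file_get_contents('http://www.cnn.com');

  $doc = new DOMDocument();
  @$doc->loadHTML($html);                    // suppress warnings from messy real-world HTML

  $nextUrls = array();
  foreach ($doc->getElementsByTagName('a') as $anchor) {
      $href = $anchor->getAttribute('href');
      if (strpos($href, 'http') === 0) {     // keep absolute links only, for simplicity
          $nextUrls[] = $href;
      }
  }

  // $nextUrls now becomes the input URL list for the next round of crawling.
  print_r(array_slice($nextUrls, 0, 10));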

In fact, this is how web crawlers for search engines typically work.  They start with a list of “top-level domains” (e.g., cnn.com, facebook.com, etc.) as their URL list, step through that list, and then crawl to all the links found on the pages they crawl.

So What’s the Purpose of the Web Crawler?

We now have a conceptual understanding of what a typical web crawler does, but it may not be clear what its real purpose is.

The ultimate purpose of any web crawler is to collect content or data from the web.  "Content" or "data" can mean a wide variety of things, from the full HTML source code of every URL requested down to a simple yes/no flag indicating whether a specific keyword appears on a page.  In our next blog post, we'll cover some common use cases and explain how the conceptual "web crawling app" we've described here could be expanded to fit them.

Want to Try Web Crawling Yourself?

If you're interested in running your own web crawls, we recommend using 80legs.  It's the same platform we use to run crawls for Datafiniti.

 


New User Agent for 80legs

On Thursday, July 17th, we'll be changing the user-agent for the 80legs crawler from "008" to "voltron".

We recognize that changing the user-agent for our web crawler could potentially be controversial, but in this case we feel it’s strongly warranted.  Over 4 months ago, we launched a completely new back-end for 80legs.  Although we still call the system “80legs”, in reality it’s a completely different web crawler.  One of the biggest features of the new crawler is that it’s considerably better about crawling websites respectfully.  In fact, we haven’t received a single complaint from webmasters since we launched the new crawler.

With this change, the 80legs crawler will now only obey robots.txt directives for the "voltron" user-agent.  It will ignore directives for the "008" user-agent.  We feel this change in behavior is appropriate, as it gives our users the chance to crawl websites inaccessible to the old crawler while still giving webmasters the opportunity to control traffic coming from the new crawler.


Quality That Scales

Seeing the Wall Before It Hits

As we begin to grow the volume of data coming into Datafiniti, data quality is becoming an increasingly important part of our operations. With over 1 million records coming in every day, making quality control (QC) automated is critical.

We recognized this need and the challenges it presented earlier this year. At that time, our data team met to discuss each person’s “ideal” QC platform. We identified the following characteristics as being absolutely essential:

  1. The platform should let a developer run fixes to highly-targeted sections or broad cross-sections of the data.
  2. If a developer implemented new QC logic or updated existing logic, that work should be applied to the entire system. No writing code twice.
  3. Any developer on our team should be able to work on the platform. Easy setup, testing, and deployment were a must.

Ultimately, the goal is scale. Not scale in the sense of the amount of data we can look at. That’s already been done. We needed scale in the sense of how our developers work. With dozens of attributes for millions of records, building out data QC for everything was always going to be hard. We needed to do whatever we could to make it easy on us.

A Vision Realized

Over the next 3 months, our team started building out this idea of a scalable QC platform. It’s now July, and we’re incredibly excited to start rolling out this platform to address and dramatically improve the quality of the data you see in Datafiniti.

This new QC platform addresses all of the goals outlined above. We use a single "base" application that serves as an integrator between a set of QC modules and other aspects of our data operations. Each QC module acts as a set of instructions to validate and fix any issues with a single attribute. So, for example, there's a module for business addresses, one for product names, and so on. Our developers work on individual modules and "plug" them into the base application. When that happens, everything else in our data pipeline picks up the new QC logic: new QC projects use it, our import process uses it, and even one-off scripts can use it.
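As a purely hypothetical sketch (in PHP, with made-up names rather than our actual code), the module/base-application idea looks something like this:

  <?php
  // Hypothetical sketch of the pluggable QC design described above.
  // Interface, class, and attribute names are illustrative only.
  interface QcModule {
      public function attribute();            // which attribute this module owns, e.g. "address"
      public function validateAndFix($value); // return a cleaned value, or null to reject it
  }

  class BusinessAddressModule implements QcModule {
      public function attribute() { return 'address'; }
      public function validateAndFix($value) {
          $value = trim($value);
          return ($value === '') ? null : $value;
      }
  }

  class QcBase {
      private $modules = array();
      public function plug(QcModule $m) { $this->modules[$m->attribute()] = $m; }
      public function clean(array $record) {
          foreach ($this->modules as $attr => $m) {
              if (isset($record[$attr])) {
                  $record[$attr] = $m->validateAndFix($record[$attr]);
              }
          }
          return $record;
      }
  }

  // Imports, QC projects, and one-off scripts would all share the same QcBase,
  // so new module logic is written exactly once.
  $qc = new QcBase();
  $qc->plug(new BusinessAddressModule());
  print_r($qc->clean(array('name' => 'Acme Cafe', 'address' => '  904 West Ave  ')));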

Now We Make It Real

With our QC platform in place, we’ve begun rolling out fixes to various “hot spots” in Datafiniti. Initial projects include:

  • Removing incomplete or corrupted business reviews
  • Fixing inaccurate business names
  • Removing invalid UPC codes

If there are quality improvements you’d like to see, please let us know! Your feedback is invaluable as always.


A Few Charts to Show Off Our Web Crawling

We’ve begun scaling out the new, Voltron-powered 80legs.  I wanted to take a few minutes to show off a few charts that illustrate our ability to scale web crawling, which ultimately means more data coming into Datafiniti.

1. Here's how many computers we've used for our web crawling:

total_nodes

2. Here’s how many computers we’re using at any given time to run web crawls:

active_nodes

3. Here’s how many URLs we’re crawling each second:

urls_crawled

So things are rolling along pretty well on the crawling front.  If we extrapolate the data on the last chart, our current peak monthly web crawling capacity is over 300 million URLs.  And we’re just getting started.

This has already had an impact on how much data is coming into Datafiniti.  I’ll be sharing some pretty charts on imports in the near future.


Early Schedule for Increasing Data Volume

It’s a bit late, but here’s an early schedule for increasing the amount of data coming into Datafiniti on a daily and monthly basis.  As you may know, we have several types of crawls we run to bring data into our search engine.  Daily crawls help keep data fresh, while comprehensive crawls make sure everything is included in our index.  Now that we’ve brought our new crawling system online, we’re working on scaling up the number of daily and comprehensive crawls so you can enjoy better data from us.

Anyway, here’s an early draft of the schedule!

DATE                # OF DAILY CRAWLS   # OF COMPREHENSIVE CRAWLS
May 1, 2014         10                  5
June 1, 2014        25                  10
July 1, 2014        50                  25
August 1, 2014      75                  50
September 1, 2014   100                 75

The exact websites and order in which we include them in our index will depend on a variety of factors, including customer demand, how easy it is to crawl the site, and so on.  The exact rollout schedule is flexible, but we’ll post updates on how we’re doing each month.


Are Republicans Better Tippers?

I stumbled across this little post over at Quartz (via Consumerist) about "Which US States Tip the Most". After a quick glance at the data, something jumped out at me: many southern and conservative states seem to be bigger tippers. So I thought it would be fun to map the data in the post against the concentration of Republican voters in each state. Using data from Wikipedia, here's what I came up with:

republican tippers

 

And here’s the data:

State                 Average Tip %   Republican Vote %
Utah                  16.1            72.8
Wyoming               15.4            68.6
Oklahoma              16.2            66.8
Idaho                 16.5            64.5
West Virginia         16.7            62.3
Arkansas              16.9            60.6
Alabama               16.4            60.6
Kentucky              16.4            60.5
Nebraska              15.5            59.8
Kansas                16.2            59.7
Tennessee             16.3            59.5
North Dakota          15.6            58.3
South Dakota          15.3            57.9
Louisiana             16.1            57.8
Texas                 16.3            57.2
Montana               16.0            55.4
Mississippi           16.5            55.3
Alaska                17.0            54.8
South Carolina        16.7            54.6
Indiana               16.4            54.1
Missouri              16.5            53.8
Arizona               16.5            53.7
Georgia               16.2            53.3
North Carolina        16.7            50.4
Florida               16.2            49.1
Ohio                  16.1            47.7
Virginia              16.0            47.3
Pennsylvania          16.0            46.6
New Hampshire         16.2            46.4
Iowa                  16.1            46.2
Colorado              16.5            46.1
Wisconsin             15.9            45.9
Nevada                16.2            45.7
Minnesota             15.7            45.0
Michigan              16.4            44.7
New Mexico            16.6            42.8
Oregon                15.7            42.2
Washington            15.9            41.3
Maine                 16.4            41.0
Connecticut           15.6            40.7
Illinois              16.5            40.7
New Jersey            16.1            40.6
Delaware              14.0            40.0
Massachusetts         15.7            37.5
California            15.5            37.1
Maryland              15.8            35.9
Rhode Island          15.8            35.2
New York              15.8            35.2
Vermont               15.5            31.0
Hawaii                15.1            27.8

Of course, my guess is that there are a ton of confounding variables at play, but there does appear to be some trend.  At the very least, the data most likely runs counter to many people's stereotypes!
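For anyone who wants to put a number on that trend, here's a quick PHP sketch that computes the Pearson correlation between the two columns.  Only a handful of rows from the table above are included to keep it short; paste in the full table to get the real figure.

  <?php
  // Pearson correlation between Republican vote share and average tip %,
  // using a small subset of the table above for illustration.
  $data = array(
      // state => array(Republican vote %, average tip %)
      'Utah'       => array(72.8, 16.1),
      'Wyoming'    => array(68.6, 15.4),
      'Texas'      => array(57.2, 16.3),
      'Alaska'     => array(54.8, 17.0),
      'California' => array(37.1, 15.5),
      'New York'   => array(35.2, 15.8),
      'Vermont'    => array(31.0, 15.5),
      'Hawaii'     => array(27.8, 15.1),
  );

  function pearson(array $x, array $y) {
      $n = count($x);
      $sumX = array_sum($x);
      $sumY = array_sum($y);
      $sumXY = $sumX2 = $sumY2 = 0;
      for ($i = 0; $i < $n; $i++) {
          $sumXY += $x[$i] * $y[$i];
          $sumX2 += $x[$i] * $x[$i];
          $sumY2 += $y[$i] * $y[$i];
      }
      $den = sqrt(($n * $sumX2 - $sumX * $sumX) * ($n * $sumY2 - $sumY * $sumY));
      return $den == 0 ? 0 : ($n * $sumXY - $sumX * $sumY) / $den;
  }

  $rep = array();
  $tips = array();
  foreach ($data as $state => $row) {
      $rep[]  = $row[0];
      $tips[] = $row[1];
  }

  printf("Correlation between Republican vote %% and tip %%: %.2f\n", pearson($rep, $tips));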


4 Reasons You Should Use JSON Instead of CSV

Do you deal with large volumes of data?  Does your data contain hierarchical information (e.g., multiple reviews for a single product)?  Then you need to be using JSON as your go-to data format instead of CSV.

We offer CSV views when downloading data from Datafiniti for the sake of convenience, but we always encourage users to use the JSON views.  Check out these reasons to see how your data pipeline can benefit from making the switch.

1. JSON is better at showing hierarchical / relational data

Consider a single business record in Datafiniti.  Here's a breakdown of the fields you might see:

  • Business name
  • Business address
  • A list of categories
  • A list of reviews (each with a date, user, rating, title, text, and source)

Now consider a whole list of these records.  Each one will have a different number of categories and reviews (and, for product records, prices).

Here's how some sample data would look in CSV (Datafiniti link):

And here’s that same data in JSON (Datafiniti link):

The JSON view looks so much better, right?
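To make the difference concrete, here's a tiny hypothetical record (the field names and values below are made up for illustration and are not actual Datafiniti output).  In JSON, the reviews simply nest inside the business:

  {
    "name": "Acme Cafe",
    "address": "123 Main St, Austin, TX",
    "categories": ["Restaurants", "Coffee Shops"],
    "reviews": [
      { "date": "2014-01-05", "rating": 5, "text": "Great espresso." },
      { "date": "2014-02-11", "rating": 3, "text": "Slow service." }
    ]
  }

In CSV, those same reviews have to be flattened into a fixed set of numbered columns (something like reviews.0.rating, reviews.1.rating, and so on), and anything beyond the chosen cut-off simply has nowhere to go.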

2. CSV will lose data

If you look closely at the CSV data above, you'll notice that we have a set number of prices and reviews for each record.  This is because we're forced to impose a cut-off on how many prices and reviews we show.  If we didn't, each row would have a different number of columns, which would make parsing the data next to impossible.  Unfortunately, many products have dozens or even hundreds of prices and reviews, so you end up losing a lot of valuable data by using the CSV view.

3. The standard CSV reader application (Excel) is terrible

Excel is great for loading small, highly-structured spreadsheet files.  It's terrible at loading files that may have 10,000 rows and 100+ columns, with some of those columns populated by unstructured text like reviews or descriptions.  It turns out that Excel doesn't follow CSV formatting standards, so even though we properly encode all the characters, Excel doesn't read them correctly.  This results in some fields spilling over into adjacent columns, which makes the data unreadable.

4. JSON is easier to work with at scale

Without question, JSON is the de facto choice when working with data at scale.  Most modern APIs are RESTful, and therefore natively support JSON input and output.  Several database technologies (including most NoSQL variations) support it as well.  It's also significantly easier to work with in most programming languages.  Just take a look at this simple PHP code for working with some JSON from Datafiniti:
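The snippet below is a minimal sketch of that idea, assuming a file called businesses.json that holds an array of records shaped like the hypothetical example earlier in this post (the file name and field names are illustrative):

  <?php
  // Decode a JSON download and walk its nested reviews.
  // Assumes businesses.json contains an array of records like the example above.
  $json = file_get_contents('businesses.json');
  $businesses = json_decode($json, true);      // true => associative arrays

  foreach ($businesses as $business) {
      echo $business['name'] . " has " . count($business['reviews']) . " reviews\n";
      foreach ($business['reviews'] as $review) {
          echo "  " . $review['rating'] . " stars: " . $review['text'] . "\n";
      }
  }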

Further Reading

Check out these helpful links to get more familiar with JSON:


Meet the Datafiniti Crew During SXSW

sxsw-2014

If you’re in Austin during SXSW, we’d love to meet you!  Here are a few ways you can meet the team behind the search engine for data:

SXSW Startup Crawl

We’ll be at the Omni Hotel during the annual SXSW Startup Crawl.  Come by our table on the first floor and pick up a Datafiniti t-shirt and sticker!  A few team members will be there to answer your questions.  You can register for the crawl here.

Come By Our Office

Our office is located at 904 West Ave, Ste. 109, Austin TX, 78701.  Let us know if you’d like to swing by!  We’re a short pedicab ride from the convention center and just far enough to feel like you’ve escaped the craziness.

Schedule a Time to Meet

If you'd like to set up a specific time to meet, please contact us.  We'll be more than happy to find some time to meet and discuss your data needs.


New 80legs Rollout Schedule

As promised, here’s a detailed rollout schedule for the new, Voltron-powered 80legs:

voltron update

 

Here are some key dates outlined:

  • March 1: We will begin on-boarding 80legs customers onto the new system.  At this time, we’ll also begin on-boarding internal daily crawls for Datafiniti.  Initially, customers will only have access to the new 80legs API.  There will be no website for the new 80legs at this point.
  • April 15: All 80legs customers will be on-boarded to the new 80legs.  We hope to have a website for the new 80legs by this time, but we are still in the process of confirming this delivery date for the website.
  • May 1: The legacy 80legs will be retired and no longer available.

We will provide detailed instructions to affected customers ahead of these dates.


How we’re building the future of web crawling

Our web crawling platform, 80legs, was built over 4 years ago. A lot has changed since 2008. For starters, “big data” wasn’t even a term then, let alone a cliche. Today, there are a wide variety of technologies available for handling true big data. For this and other reasons, we’ve been secretly working on a massive overhaul to 80legs that promises to deliver the future of web crawling. We call it Voltron.

voltron

Built from the Ground Up

Voltron has been built from the ground up to take advantage of the latest technologies for storing, processing, and delivering massive amounts of data. Here are some quick highlights of the benefits you’ll see:

  • Auto-scaling infrastructure using cloud computing for reduced queue time and faster crawling
  • A RESTful API for more seamless integration
  • Moving from Java to Javascript for easier 80app development
  • Faster result delivery using global CDNs

The Rollout Schedule

Much of the alpha development for Voltron has been completed. Internal testing will begin in mid-February. Crawls used for Datafiniti data collection and those run by 80legs customers will be on-boarded in March. We expect to wrap up the on-boarding and final testing in April, with a shutdown of the legacy system by May. As with any large software rollout, there may be unexpected hiccups, but we'll keep everyone up-to-date on the latest developments as we progress.

How It Will Affect Datafiniti Users

Voltron will enable a significant increase in the amount of data made available to Datafiniti users. During the course of Voltron’s rollout, we will be scaling the number of “daily crawls” to select websites from 10 to 50 to 100 between February and May. Each daily crawl will collect data from 100,000 URLs from each select website. By May, Datafiniti will have over 10,000,000 business or product records updated each day. This means our customers will enjoy having daily-updated review and pricing data from the websites they are most interested in monitoring.

How It Will Affect 80legs Users

Many 80legs users have rightly felt frustration over crawl performance recently. This will change with Voltron. Intro, Plus, and Premium users will see queue times drop below 1 hour. Dedicated users will see 0 queue time. Crawl speed will improve as well, as several internal bottlenecks are being addressed by Voltron. In addition to performance improvements, we will be providing a new RESTful API and website that should make developing crawls much easier for everyone.

The Future of Web Crawling... Soon!

We're very excited to start using Voltron ourselves to feed Datafiniti and to provide it to our 80legs customers for their own web crawling.  Stay tuned for more updates as the future of web crawling takes shape!
