If you ever find yourself scraping an ASP.Net page where you need to submit data through a form, this post might come in handy. Scrapy, an open source web crawling framework, is the tool I'm using for this tutorial.

## Dealing with ASP.Net Pages, PostBacks and View States

Websites built using ASP.Net technologies are typically a nightmare for web scraping developers, mostly due to the way they handle forms. These websites usually send state data back and forth in requests and responses in order to keep track of the client's UI state. Think about those websites where you register by going through many pages while filling your data into HTML forms. An ASP.Net website typically stores the data you filled out on the previous pages in a hidden field called "__VIEWSTATE", which contains a huge Base64 encoded string (dozens of kB sometimes, I'm not kidding) representing the client UI state, including the values from the form.

This setup is particularly common for web applications where user actions in forms trigger POST requests back to the server to fetch data for other fields. The __VIEWSTATE field is passed along with each POST request that the browser makes to the server. The server then decodes the client's UI state from this data, performs some processing, computes a new view state based on the new values and renders the resulting page with that new view state as a hidden field. If the __VIEWSTATE is not sent back to the server, you are probably going to see a blank form as a result, because the server has completely lost the client's UI state.

So, in order to crawl pages resulting from forms like this, you have to make sure that your crawler is sending this state data with its requests; otherwise the page will not load what it's expected to load. Here's a concrete example, so that you can see firsthand how to handle these situations.

## Scraping a Website Based on ViewState

Today's scraping guinea pig is quotes.toscrape.com/search.aspx. This website lists quotes from famous people, and its search page allows you to filter quotes by author and tag.

A change in the Author field fires a POST request to the server to fill the Tag select box with the tags related to the selected author. Clicking Search brings up the quotes that match the selected author and tag.

In order to scrape these quotes, our spider has to simulate the user interaction of selecting an author, selecting a tag and submitting the form. Take a closer look at each step of this flow by using the Network panel that you can access through your browser's Developer Tools. First, visit quotes.toscrape.com/search.aspx, then open the tool by pressing F12 or Ctrl+Shift+I (if you are using Chrome) and click on the Network tab.

Select an author from the list and you will see that a request to "/filter.aspx" has been made. Clicking on the resource name (filter.aspx) leads you to the request details, where you can see that your browser sent the author you selected along with the __VIEWSTATE data that was in the original response from the server.

Choose a tag and click Search. You will see that your browser sent the values selected in the form along with a __VIEWSTATE value different from the previous one. This is because the server included some new information in the view state when you selected the author.

Now you just need to build a spider that does the exact same thing that your browser did.
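Before coding the full spider, it can be worth verifying this flow by hand with a tiny throwaway spider. The sketch below is not part of the original post: the spider name is arbitrary, and it assumes "Albert Einstein" appears as one of the author option values on the search page. It simply pulls __VIEWSTATE out of the search page and replays the author-selection POST, so you can check that /filter.aspx responds with a populated tag list.

```python
import scrapy


class ViewStateProbeSpider(scrapy.Spider):
    """Minimal probe spider (illustrative only, not from the original post)."""
    name = 'viewstate-probe'
    start_urls = ['http://quotes.toscrape.com/search.aspx']

    def parse(self, response):
        # The state the server expects back lives in a hidden <input>.
        viewstate = response.css('input#__VIEWSTATE::attr(value)').extract_first()
        self.logger.info('__VIEWSTATE length: %s', len(viewstate or ''))
        # Replay the POST the browser makes when an author is selected.
        # 'Albert Einstein' is assumed to be one of the <option> values.
        yield scrapy.FormRequest(
            'http://quotes.toscrape.com/filter.aspx',
            formdata={'author': 'Albert Einstein', '__VIEWSTATE': viewstate},
            callback=self.parse_filtered,
        )

    def parse_filtered(self, response):
        # If the view state round-trip worked, the tag <select> should now
        # be populated with tags for the chosen author.
        self.logger.info('tags: %s',
                         response.css('select#tag > option ::text').extract())
```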
## Building your Spider

Here are the steps that your spider should follow:

1. Fetch quotes.toscrape.com/search.aspx
2. For each Author found in the form's authors list:
    - Create a POST request to /filter.aspx passing the selected Author and the __VIEWSTATE value
3. For each Tag found in the resulting page:
    - Issue a POST request to /filter.aspx passing the selected Author, the selected Tag and the view state
4. Scrape the resulting pages

## Coding the Spider

Here's the spider I developed to scrape the quotes from the website, following the steps just described:

```python
import scrapy


class SpidyQuotesViewStateSpider(scrapy.Spider):
    name = 'spidyquotes-viewstate'
    start_urls = ['http://quotes.toscrape.com/search.aspx']
    download_delay = 1.5

    def parse(self, response):
        for author in response.css('select#author > option ::attr(value)').extract():
            yield scrapy.FormRequest(
                'http://quotes.toscrape.com/filter.aspx',
                formdata={
                    'author': author,
                    '__VIEWSTATE': response.css('input#__VIEWSTATE::attr(value)').extract_first()
                },
                callback=self.parse_tags
            )

    def parse_tags(self, response):
        for tag in response.css('select#tag > option ::attr(value)').extract():
            yield scrapy.FormRequest(
                'http://quotes.toscrape.com/filter.aspx',
                formdata={
                    'author': response.css(
                        'select#author > option[selected] ::attr(value)'
                    ).extract_first(),
                    'tag': tag,
                    '__VIEWSTATE': response.css('input#__VIEWSTATE::attr(value)').extract_first()
                },
                callback=self.parse_results,
            )

    def parse_results(self, response):
        for quote in response.css("div.quote"):
            # Read each field from the individual quote selector,
            # not from the whole response, so every item is distinct.
            yield {
                'quote': quote.css('span.content ::text').extract_first(),
                'author': quote.css('span.author ::text').extract_first(),
                'tag': quote.css('span.tag ::text').extract_first(),
            }
```

Step 1 is done by Scrapy itself, which reads start_urls and generates a GET request to /search.aspx.

Step 2 is the job of the parse() method. It iterates over the Authors found in the first select box and creates a FormRequest to /filter.aspx for each Author, simulating a user clicking on every element in the list. It is important to note that the parse() method reads the __VIEWSTATE field from the form it receives and passes it back to the server, so that the server can keep track of where we are in the page flow.

Step 3 is handled by the parse_tags() method. It's pretty similar to the parse() method, as it extracts the Tags listed and creates POST requests passing each Tag, the Author selected in the previous step and the __VIEWSTATE received from the server.

Step 4: finally, the parse_results() method parses the list of quotes presented by the page and generates items from them.
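To try the spider out, you can run it with Scrapy's runspider command and dump the items to a feed file; the filename below is just an assumption about where you saved the code:

```
$ scrapy runspider spidyquotes_viewstate.py -o quotes.json
```

The -o flag writes the scraped items to quotes.json, which makes it easy to confirm that the __VIEWSTATE handling is actually producing quotes and not blank pages.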
## Simplifying your Spider Using FormRequest.from_response()

You may have noticed that, before sending a POST request to the server, our spider extracts the pre-filled values that came in the form it received from the server and includes those values in the request it's about to create. We don't need to code this manually, since Scrapy provides the FormRequest.from_response() method. This method reads the response object and creates a FormRequest that automatically includes all the pre-filled values from the form, along with the hidden ones. This is how our spider's parse_tags() method looks using it:

```python
def parse_tags(self, response):
    for tag in response.css('select#tag > option ::attr(value)').extract():
        yield scrapy.FormRequest.from_response(
            response,
            formdata={'tag': tag},
            callback=self.parse_results,
        )
```

So, whenever you are dealing with forms containing hidden fields and pre-filled values, use the from_response method, because your code will look much cleaner.

## Wrap Up

You can read more about ViewStates here. I'm always on the lookout for new web scraping hacks to cover, so if you have any obstacles that you've faced while scraping the web, please let me know in the comments below or feel free to reach out on Twitter or Facebook.

As a heads up, Scrapy Cloud is a forever free web scraping platform that lets you scale and manage your crawlers. If you're looking to deploy the spiders you built in this tutorial, give Scrapy Cloud a try.

This post was written by Valdir Stumm (@stummjr), a developer at Scrapinghub. Learn more about what web scraping and web data can do for you.

Originally published on the Scrapinghub blog.