{"id":2458,"date":"2010-06-04T22:00:00","date_gmt":"2010-06-04T22:00:00","guid":{"rendered":"http:\/\/www.b.shuttle.de\/hayek\/Hayek\/Jochen\/wp\/blog-en\/2010\/06\/04\/more-on-web-harvesting\/"},"modified":"2010-06-04T22:00:00","modified_gmt":"2010-06-04T22:00:00","slug":"more-on-web-harvesting","status":"publish","type":"post","link":"https:\/\/wp.jochen.hayek.name\/blog-en\/2010\/06\/04\/more-on-web-harvesting\/","title":{"rendered":"more on web harvesting"},"content":{"rendered":"<ul>\n<li><a href=\"http:\/\/www.rubyrailways.com\/data-extraction-for-web-20-screen-scraping-in-rubyrails-episode1\/\" rel=\"bookmark\" title=\"Permanent Link to Data Extraction for Web 2.0: Screen Scraping in Ruby\/Rails, Episode 1\">Data Extraction for Web 2.0: Screen Scraping in Ruby\/Rails, Episode 1<\/a><\/li>\n<li><a href=\"http:\/\/scrubyt.org\/\">http:\/\/scrubyt.org<\/a> (Ruby)<\/li>\n<li><a href=\"http:\/\/hpricot.com\/\">Hpricot.com<\/a>: &#8220;a swift, liberal HTML parser with a fantastic library&#8221; (Ruby)<\/li>\n<li><a href=\"http:\/\/brightplanet.com\/\">http:\/\/brightplanet.com<\/a>: &#8220;Pioneers in Harvesting the Deep Web&#8221;<\/li>\n<li>&#8230;\u00a0 <\/li>\n<\/ul>\n<p>\nUpdate 2010-06-05\/06:<br \/>\nOne night later I am still very impressed by <a href=\"http:\/\/scrubyt.org\/\">scrubyt<\/a>, and I really want to try it on a real-life example quite soon.<br \/>\nIn a way, scrubyt does what I also do with my <a href=\"http:\/\/aleph-soft.com\/JHwis\/\">JHwis<\/a> toolkit, but of course it looks as if it goes far beyond that. JHwis navigates through websites in a programmed way and downloads certain HTML files to disk for further processing. Those HTML files contain HTML tables, and there is already a nice Perl library, which I wrap into a command-line utility that extracts the HTML tables into CSV files. 
These CSV files are not really of a kind that you can directly load into a spreadsheet GUI utility like <i>OpenOffice Calc<\/i>. They need further mechanical processing and refinement before they can be loaded into database tables.<br \/>\nWith scrubyt&#8217;s help you (apparently) extract an XML file from the quite nested HTML table structures of a web page.<br \/>\nYears ago, when I started my project, I created CSV files. A couple of years later I also created XML files, but I never adapted the entire tool chain to make use of them.<br \/>\nMy XML files reflect exactly the data that I want to make use of.<br \/>\nscrubyt&#8217;s XML files reflect (I think) the entire table structure.<br \/>\nNowadays, with XSLT processors, you &#8220;easily&#8221; develop an XSL script (a &#8220;stylesheet&#8221;) that extracts the portion you are really interested in.<br \/>\nTo be continued &#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Data Extraction for Web 2.0: Screen Scraping in Ruby\/Rails, Episode 1 http:\/\/scrubyt.org (Ruby) Hpricot.com: &#8220;a swift, liberal HTML parser with a fantastic library&#8221; (Ruby) http:\/\/brightplanet.com: &#8220;Pioneers in Harvesting the Deep Web&#8221; &#8230;\u00a0 Update 2010-06-05\/06: One night later I am still very impressed by scrubyt, and I really want to try it on a 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_crdt_document":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_share_on_mastodon":"0"},"categories":[229,413,698],"tags":[],"class_list":["post-2458","post","type-post","status-publish","format-standard","hentry","category-http-scripting","category-page-scraping","category-web-harvesting"],"share_on_mastodon":{"url":"","error":""},"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/paO0kP-DE","jetpack_likes_enabled":true,"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/posts\/2458","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/comments?post=2458"}],"version-history":[{"count":0,"href":"https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/posts\/2458\/revisions"}],"wp:attachment":[{"href":"https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/media?parent=2458"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"
https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/categories?post=2458"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wp.jochen.hayek.name\/blog-en\/wp-json\/wp\/v2\/tags?post=2458"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}