HTML to Excel data extraction (listly.io)
121 points by changmin on April 16, 2017 | hide | past | favorite | 21 comments



Thank you, all.

Listly.io is a personal project I built a few days ago. I hope to hear your opinions on whether it is useful for you... or not.

Listly.io turns HTML into Excel in seconds, without coding. It finds patterns of repeated structure and extracts all of the image links and text. It does not look for specific tags (table, ul, ...), but for the structure itself.

For developers, I think an API would be the best way to adapt this extractor to another scraper, or to your own.
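
To give a rough idea of the approach, here is a simplified sketch in Python (BeautifulSoup), not the actual implementation: group an element's direct children by their "shape" (the tag plus its child tags), pick the parent whose children repeat the most, and write each repeated child out as one row. The file names are just placeholders.

    import csv
    from collections import Counter
    from bs4 import BeautifulSoup

    def shape(el):
        # An element's "shape": its tag plus the tags of its direct children.
        return (el.name, tuple(sorted(c.name for c in el.find_all(recursive=False))))

    def best_repeated_parent(soup):
        # Pick the element whose direct children share the most common shape.
        best, best_count = None, 0
        for parent in soup.find_all(True):
            children = parent.find_all(recursive=False)
            if len(children) < 3:
                continue
            (_, count), = Counter(shape(c) for c in children).most_common(1)
            if count > best_count:
                best, best_count = parent, count
        return best

    def rows_from(parent):
        # One row per repeated child: flattened text plus any image links.
        for child in parent.find_all(recursive=False):
            text = " ".join(child.get_text(" ", strip=True).split())
            images = ";".join(img.get("src", "") for img in child.find_all("img"))
            yield [text, images]

    if __name__ == "__main__":
        soup = BeautifulSoup(open("page.html", encoding="utf-8").read(), "html.parser")
        parent = best_repeated_parent(soup)
        if parent is not None:
            with open("out.csv", "w", newline="", encoding="utf-8") as f:
                csv.writer(f).writerows(rows_from(parent))

The resulting CSV opens directly in Excel.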


This site is actually really awesome, and it has worked for every website I've tried! My only slight issue with it, however, is that it took me a few minutes to work out what "HTML codes" were, and even then it was only from watching the video. Have you considered renaming it to something like "HTML Source Code"? It also seems to struggle on web pages where it can't find tables, such as the following page I made, which contains no information:

https://hastebin.com/eguluvoquq.html


I appreciate your thorough testing and feedback. Your suggestion is very helpful.

Actually, any (partial or full) HTML source code works: <div></div>, <p></p>, <span></span>, <html></html>, etc. Following your advice, I changed the placeholder description to "any HTML Source Code".

Secondly, my server returns a 500 error only when there is nothing to extract, as with your page. I will fix it soon. Thank you.


I think Google Spreadsheets has something similar:

https://support.google.com/docs/answer/3093339?hl=en

I use it all the time for better sorting, filtering, etc.
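
e.g. the IMPORTHTML function described on that page pulls a table or list straight from a URL into the sheet (the URL here is just a placeholder):

    =IMPORTHTML("https://example.com/products", "table", 1)

The second argument is "table" or "list", and the third is the 1-based index of that element on the page.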


I have used Google Spreadsheets to extract <TABLE> or <UL> content. It works very well with those.

Compared to that, listly.io works with any type of tag, as long as there are repeated structures.

In my experiments, it works well with hundreds of kinds of websites.

e.g. Google/Bing search results, Amazon/Walmart/eBay product lists, Twitter/Facebook/Tumblr posts, Twitch listings, Bloomberg finance info, forum threads, Instagram comments, etc.


On macOS, you can literally copy and paste tables from Safari into Numbers. Numbers can export to Excel, if you need to.


I tried it on Indeed job listings, Amazon product search, and Craigslist, and could not get it to work. I suggest you test the tool with the top 10-20 most popular websites that contain listing-type data. Our company also did a little side project similar to yours and packaged it as a Chrome extension. We learned that it is quite hard to make a universal tool that guesses where the data is, especially since so many websites use <div> and <ul> with CSS to form table-like structures instead of a plain <table>. If you want, take a look at our tool: https://chrome.google.com/webstore/detail/instant-data-scrap...


I tested it just now on an Amazon product search, https://www.amazon.com/s/ref=nb_sb_noss_2/130-9531298-529675... It works well, though the result comes out slowly (about 45 seconds). On the result page you can find the product list with 28 items. For better speed, I agree that publishing an API or a Chrome extension would be the way to go.

In addition, it also works in seconds on the Craigslist apts/housing page: http://seoul.craigslist.co.kr/search/apa

Sorry for the slowness. This is a personal project, and I could not have predicted so many new visitors; I need to scale the server up and out.


Looks like import.io.

https://www.import.io/

I tried writing a script to do the same thing before; it turns out that finding the element on the page with the most children and assuming each child is an entry works surprisingly often.
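
Roughly like this (a from-memory sketch in Python with BeautifulSoup, not the original script):

    from bs4 import BeautifulSoup

    def guess_entries(html):
        soup = BeautifulSoup(html, "html.parser")
        tags = soup.find_all(True)
        if not tags:
            return []
        # The element with the most direct children is assumed to be the list container.
        container = max(tags, key=lambda el: len(el.find_all(recursive=False)))
        # Each direct child is treated as one entry.
        return [child.get_text(" ", strip=True)
                for child in container.find_all(recursive=False)]

A real version would probably need some tie-breaking when several containers have similar child counts.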


The difference:

Import.io needs the user's clicks to determine what to extract, so the user has to repeat the setup whenever the web page changes.

Listly.io only needs a URL or HTML source code, so it keeps working even if the web page changes.


I use this extension for tables. Gets the job done: https://chrome.google.com/webstore/detail/table-capture/iebp...


I think a Chrome extension is the best way for end users, too.


I'm guessing this is doing some kind of tree-diff on the DOM?

Now if you had this generate a GraphQL spec file, you could run a GraphQL server acting as a proxy to lots of websites. That would be interesting. Not sure how that fares with the website owners' ToS, though.


Thanks for the idea. It makes me think about how to build the API. I need to take a look at GraphQL.


Yahoo also has YQL, an SQL-like language that can filter the DOM and return JSON, XML, or RSS.
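
From memory, a query against its html table looks something like this (URL and XPath are just placeholders):

    select * from html where url="https://example.com/products" and xpath='//ul[@class="items"]/li'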


Parseur is doing the same for emails. It's a bit more manual at first but it works better IMHO.

https://app.parseur.com


I use a chrome web scraping extension:

http://webscraper.io/


Falls flat on this very website.


To be fair to the guy, HN has some of the worst markup out there. Admirably little JS and CSS, but boy oh boy is that HTML ugly.


OutWit Hub is my go-to for large, complex extractions: https://www.outwit.com/

Their marketing is poor, but the product is very powerful.


This looks good, can't wait for the scraper feature.



