It is a truth universally acknowledged that online banking websites are shit. Smile, for example. If you’re so infelicitous as to press the back button, or reload, or open a link in a new tab (because that’s how HTTP is supposed to fucking work), it logs you out. And apologises (sneeringly). If, on the login screen, you start typing your sort code, it “logs you out”. Because reasons. The less I have to deal with the website, the better my general mental health seems (although I’ll admit there are confounding factors here).
Scraping it is. Looks like scrapers are my thing of late. Introducing Stepford:
Give it your bank details¹ and two or three minutes later it’ll spit your last 25 pages of transactions into a JSON or CSV file (or output them to the console lol).
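The JSON side is just `JSON.stringify`; the CSV side needs a touch more care. A minimal serialiser might look something like this (the transaction shape here is a guess for illustration, not Stepford’s actual schema):

```javascript
// Turn an array of transaction objects into CSV text. The field names
// (date/description/amount) are made up for this sketch.
function toCsv(transactions) {
  const header = 'date,description,amount';
  const escape = (value) => {
    const s = String(value);
    // Quote fields containing commas, quotes or newlines, per RFC 4180.
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const rows = transactions.map((t) =>
    [t.date, t.description, t.amount].map(escape).join(',')
  );
  return [header, ...rows].join('\n');
}

console.log(toCsv([
  { date: '2015-03-01', description: 'COFFEE, AGAIN', amount: -2.4 },
]));
// date,description,amount
// 2015-03-01,"COFFEE, AGAIN",-2.4
```

Bank descriptions are full of commas, hence the quoting dance.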
My last attempt at a scraper manually downloaded pages and extracted stuff with Cheerio. That’s all well and good, but if you’re navigating through multiple pages, inputting data on each, and clicking a bunch of links, well, it turns out Zombie’s pretty good at that sort of thing. With a nice Promise-based API for clicking links and submit buttons and whatnot. Nifty.
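The click-through flow looks roughly like this with Zombie’s promise API. To be clear, the URL, field names and link text below are placeholders for the sake of the sketch, not Smile’s real markup:

```javascript
// Sketch of browser-driven navigation with Zombie. Everything named here
// (URL, form fields, link text) is hypothetical.
const Browser = require('zombie');

async function fetchStatementPage() {
  const browser = new Browser();
  await browser.visit('https://bank.example.com/login');
  browser.fill('sortCode', '00-00-00');          // hypothetical form fields
  browser.fill('accountNumber', '12345678');
  await browser.pressButton('Continue');         // submits the login form
  await browser.clickLink('Recent transactions');
  return browser.html();                          // HTML for the parser
}
```

Each `visit`/`pressButton`/`clickLink` resolves once the resulting page has loaded and its scripts have run, which is exactly the tedium you’d otherwise hand-roll with manual downloads.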
Currently I’m working on a desktopish budgeting app (which doesn’t work yet, please god don’t try it, it’ll break) that makes use of a nice PouchDB wrapper around the scraper.
¹ I’d promise I don’t do anything nefarious with them, but then I’d be promising things the MIT licence doesn’t, and you know I’d rather not. ⤣