The vaccine situation in Veneto is complex. It's not "first come, first served". There are different cohorts. Right now the only people that are eligible are people over 80 and people with pre-existing health conditions.
Many of them have trouble using the official portal, so they ask their children to help them. Their children are around 50, and have to actively poll the website trying to get a spot.
I didn't build the bot for young, tech-savvy people who want to jump to the front of the line. It's exactly the opposite. By smoothing the UX it makes booking a vaccine spot accessible, and it removes the burden of constantly checking the site. It may also help avoid wasting doses by sending last-minute notifications about newly available spots.
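At its core, a bot like this is just a poll-and-notify loop. A minimal sketch, where `fetchSpots` and `notify` are placeholder callbacks (the real bot's integration with the regional portal and its messaging channel are not shown here):

```javascript
// Sketch of a poll-and-notify loop. `fetchSpots` and `notify` are
// placeholders: the real bot talks to the regional booking portal and
// to a messaging service, which is out of scope here.
async function checkForSpots(fetchSpots, notify) {
  const spots = await fetchSpots();
  if (spots.length > 0) {
    await notify(`New vaccine spots available: ${spots.length}`);
  }
}

// Poll on the user's behalf every few minutes, so nobody has to sit
// there refreshing the official website by hand.
function startPolling(fetchSpots, notify, intervalMs = 5 * 60 * 1000) {
  return setInterval(() => checkForSpots(fetchSpots, notify), intervalMs);
}
```

The point is that the machine does the refreshing, and the human only gets pinged when there is actually something to book.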
I have a few emails from people thanking me because they were frustrated by the official website, and the bot helped them book a spot for their parents.
I'm not a lawyer, but "codice fiscale" is not PHI. The Italian Data Protection Authority classifies the codice fiscale as PII[0], and it's not mentioned in the PHI section[1,2,3,4].
First off, good work! It's always nice to help people deal with these (sick and badly thought-out) platforms.
Anyhow, you should also check whether a "tessera sanitaria" is considered PHI, since it includes the "codice fiscale" (and if I recall correctly, it is often used in "fascicoli sanitari" to identify an individual).
The thing that backend devs don't realize is that frontend is not easy. A frontend dev builds something that will be used by a HUMAN, while a backend dev builds something that will be used by a PROGRAM. Related to the comment, things were changing a lot because there is no single way to do things, unlike on iOS/Android. There is no standard imposed by a huge private company like Apple or Google. Everyone is free to reinvent the wheel.
Facebook built something super valuable, ReactJS, and it is becoming the _de facto standard_. ReactJS is simple, opinionated, and backed by a large corporation. Thanks to ReactJS a new ecosystem was born, and now you finally have many projects gravitating around it, and this is good. It's just a matter of time IMO.
I strongly disagree with _"the browser is a bad app platform and javascript is a bad language"_. JavaScript has its quirks, but it's also really pleasant to work with if you don't do weird shit like `isNaN('wft' - 1) // true`. The browser is a great virtual machine that is democratizing the web. If you do things right, you can have graceful degradation (the app degrades gracefully to a lower level of user experience in older browsers) to widen the audience of your web app, from the dev in SF with gigabit internet to someone living in a rural area in a less tech-advanced country. This of course has a cost, namely the time spent developing your app.
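For the curious, that quirk is down to implicit coercion, not `isNaN` itself; `Number.isNaN` (added in ES2015) avoids the surprise:

```javascript
// The '-' operator coerces its operands to numbers, so a non-numeric
// string becomes NaN before the global isNaN ever sees it.
console.log('wft' - 1);               // NaN
console.log(isNaN('wft' - 1));        // true

// Number.isNaN does no coercion: it's true only for the NaN value itself.
console.log(Number.isNaN('wft'));     // false
console.log(Number.isNaN('wft' - 1)); // true
```

So the "weird" result is consistent once you know the coercion rules; it's just not what a newcomer expects.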
What's so unusual about the current web platform is that none of the technologies involved were actually created for the task they are being used for right now.
HTML wasn't meant to build interactive application UIs. CSS wasn't meant to do layout (or if it was, it was done so badly that the intention is unrecognisable). HTTP wasn't meant to carry fine-grained API calls. JavaScript wasn't meant for large-scale application development.
It's a long list of wrong tools for the job, cobbled together to supposedly form the right tool for the job. It works to some degree, but it's messy.
That's not to say that creating a technology for a specific purpose guarantees that it will work well. EJB, SOAP, WS-*: there are many examples of failed purpose-built technologies.
But having an entire platform built exclusively from components that are unfit for purpose isn't something that happens very often.
As a fullstack dev, it amuses me when my underlings claim that backend is hard and they want to learn it. They're already doing front-end. (And yes, we're working them towards more and more backend, and even mobile.)
It's just different, IMO. Programming is programming, but different specialties have different things you need to worry about.
Absolutely. I've been doing "frontend" work since before it was even a term (i.e. HTML tables for layout), and it's still the #1 thorn in my side. Even with all the tools available, developing a functional, aesthetically pleasing GUI is not an easy task.
Plus, organizing your code so it's not a 15-nested-div mess with inline CSS just to make the damn thing look right.
It's probably why most of my personal projects have a CLI, and that's it.
"The thing that backend devs don't realize is that frontend is not easy." - I think that's a false dichotomy, one that describes no professional programmer I have ever encountered except the most junior, or the most foolish of hopefully-soon-to-be-ex colleagues. In my experience it is product managers, architects with a mostly-infrastructure background, and nontechnical stakeholders, who tend to assume that frontend is easy. All developers who prefer to work in the backend will have built a user interface at some time, and learned that programmatically fulfilling human needs and intentions is a tough ask.
"ReactJS ... is becoming the _de facto standard_ ... and this is good" - strong disagree: de facto standards controlled by a proprietary actor have rarely been a good thing. The woes of dealing with Microsoft Office file formats, or Adobe Flash, or Google Reader pay testament. Heck, QWERTY keyboards. Or Edison's AC/DC feud with Westinghouse, a struggle over proprietary preferences that can be directly blamed for the extended length of my tea breaks when visiting the US.
I feel like there's something missing from core ES, something conceptual that would encourage an ecosystem to develop that isn't framework-specific. We just don't know what that is, yet.
I've been a web developer for 15 years now. Building the UI is always hard and is where wasted time is felt the most. That's why I prefer to work on the backend as much as I can.
I'm against the frontend/backend split. I think any web dev needs to know HTML, CSS, and JS. That's mandatory. Beyond that, you should definitely learn the server side too, especially databases, since they're by far the biggest bottleneck and performance hog you'll need to optimize.
But I'm getting old, and I know that because I think everything new is stupid. Once upon a time we had RAD tools to build UIs, but unfortunately that's lost in the past...
I'm the main developer of the extension. We use the public post to connect your Facebook profile with your user in our database. If you want you can just publish the public key without anything else. After we validate your key, you can remove the post.
If you have any suggestions on how to improve it, feel free to open an issue:
The thing is that at the beginning we were just three friends trying to develop a simple webapp. The domain hack (.st) was a way to keep our URLs short. The .com domain was parked. As you wrote, "I was happy with that situation". Well, I was not happy, but I thought it was not that important.
Now we are a company and things are different.
My mistake was not acting against it directly. The urlist.com guy put up a fake landing page with our contents before; now he is linking to a webapp related to bookmarking: he knows what he is doing.
I was wondering if anyone has ever experienced a problem like this one, and what he/she has done to solve it :)
Dear HN,
since I'm a little bit disappointed with the new Twitter layout, I made a tiny Greasemonkey script to fix it, and I'm sharing it with you.
The script puts the timeline on the left and also fixes the ">" icons on the menu sidebar (using "transform: rotate(180deg)") for your viewing pleasure.
1. Install Greasemonkey (skip this step if you are using Chrome)
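For reference, the gist of a script like this is just injecting a small stylesheet into the page. A minimal sketch, where `.timeline` and `.sidebar-icon` are placeholder selectors (the real script targets Twitter's actual markup):

```javascript
// ==UserScript==
// @name  new-twitter layout fix (sketch)
// ==/UserScript==

// The CSS to inject: move the timeline to the left column and flip
// the ">" icons. The selectors here are placeholders for illustration.
const css = [
  '.timeline { float: left; }',
  '.sidebar-icon { transform: rotate(180deg); }',
].join('\n');

// Greasemonkey (or Chrome's native userscript support) runs this in
// the page, where `document` exists.
if (typeof document !== 'undefined') {
  const style = document.createElement('style');
  style.textContent = css;
  document.head.appendChild(style);
}
```

Injecting a `<style>` element this way means the tweak survives Twitter re-rendering individual nodes, which is more robust than patching inline styles one element at a time.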
I don't want to seem inquisitive, but it looks a lot like VIM. I thought it was VIM, or a human-friendly version of it.
I took a look at http://www.sublimetext.com/features and many of the features listed are available in vanilla VIM. Many, if not all, except "Asynchronous file loading, so you're never blocked when loading files off slow network drives" and "WinSCP integration for editing remote files via SCP and FTP" (OK, maybe you can get the latter using sshfs).
[edited, I've removed "Commenting and uncommenting blocks of text" from the features vim does not have.]
I think the direct way with scp URIs is faster in terms of setup time, as you don't have to set up FUSE and fiddle with permissions.
I must admit I don't use it regularly (though I use tramp mode quite often), but you can browse remote directories, switch between hosts without mounting a directory first, and open files outside the current mountpoint. The only major drawback is when you don't use SSH keys, or use programs that don't support gvfs/KIO.
In my experience, even if you are building a quick'n'dirty prototype, you'll always need to fire up a terminal or run the GUI to interact/play with the prototype you are building, to see if it "works".
In this case you may apply a --this is my very own definition-- "lightweight TDD approach": just automate the procedures that you'd normally run by hand to test your prototype. Moreover, since you are writing a test for something that does not exist yet, it may help you better define what you want from your experimental code, because you are using your prototype before it even exists :)
TDD takes longer. Studies conducted at Microsoft and HP (or possibly IBM) found that development time with TDD was 10-30% longer. That's probably a minimum, and largely determined by the TDD experience level of the team members.
The massive upside is that there are far fewer defects in the end product (I think it was 50-80% fewer in the study, but I can't recall exactly).
Depending on the project, the extra time needed might or might not be worth it.
Take a somewhat complex web app that is used for a marketing campaign. It has a lifetime of only 2 months. In this case it does not make sense to use TDD, as the lower defect rate and "future-proofing" qualities might very well not be worth an increased budget.
Yes. My experience with TDD is that it did save time, but not necessarily in initial development. The long-term effect of fewer defects combined with the capability for a higher release velocity is a net savings.
As a counter-example, I'm working with a company that uses TDD for their current development, but employees work on a platform with little test coverage. As a consequence of the lacking unit tests, they're only able to release a few versions each year, because their manual testing burden is so high (estimated by one manager as about half the cost of a release). They've found that TDD for their new features doesn't escalate the QA cost as significantly as skipping it would. Bottom line: it's a little more time up front, but they've found it to be a long-term gain.
I strongly agree with all 5 rules Randall pointed out. But, IMHO, the very first step to really internalize those rules is to fail dramatically at every one of them.
[this is my first comment on HN! :)]