I've been mostly pleased with our use of python-polylith [1] with poetry in a production application. We output a webapp, python sdk, and CLI as separate "projects."
It doesn't _really_ solve Python dependency/import issues. Instead, it helps you keep your project in discrete, well-organized chunks, and it makes it easy to package up the separate projects as artifacts.
I've run into some issues with versioning separate projects but I suspect that is a matter of bandwidth rather than an actual, insoluble issue.
I'd use it again at a startup or on a project where you need to ship a bunch of artifacts but don't have a lot of bandwidth.
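For anyone unfamiliar: a polylith workspace splits code into reusable "components" and entry-point "bases," and each project gets its own pyproject.toml that pulls in just the bricks it needs. Roughly like this (names are illustrative, not from my actual repo, and the exact layout depends on the theme you pick):

```toml
# Workspace layout (sketch):
#   bases/cli/src/myorg/cli/         # entry point for the CLI artifact
#   components/auth/src/myorg/auth/  # shared brick, reused across projects
#   projects/cli/pyproject.toml      # one packaging config per artifact
#
# projects/cli/pyproject.toml, selecting bricks via relative includes:
[tool.poetry]
name = "myorg-cli"
version = "0.1.0"
description = "CLI artifact built from shared bricks"
packages = [
    { include = "myorg/cli", from = "../../bases/cli/src" },
    { include = "myorg/auth", from = "../../components/auth/src" },
]
```

If I remember right, building a project with relative includes like this needs the multiproject build plugin that python-polylith's docs point you at, so treat the above as a sketch rather than a drop-in config.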
Are there any comparisons of weather apps by area? For example, "for the San Francisco Bay Area, Apple Weather is most accurate on rain, but for NYC, AccuWeather is better." I suppose you ought to be comparing weather APIs rather than apps, but it would be most usable if you just knew which app to download.
I would be careful with LaTeX. I used to have a LaTeX resume generated with LuaTeX. At an old company, I saw my LaTeX resume in the ATS long after I was hired. Apparently, something happened and the PDF displayed as blurred-but-not-unreadable in the ATS. Maybe the ATS did some post-processing or used a limited PDF rendering engine? Lucky for me, the resume for that job was just a formality. These days, I just use Google Docs and export to PDF.
I don’t know the example systems, but I’ve applied to a handful of companies recently, all running the same-ish Workday resume ingestion. You can actually tell which are running a more recent version because the parsing is more accurate.
There’s also a common no-account, single-page application flow; at one company I’d applied to, it was called Lever(?).
I’d thought that for ATS parsing, if you upload a resume with an application and a few of the prior-experience text boxes come out accurate, then you’re good. I’ve always had to fix my experience entries, though, even when using a Word doc.
I would have liked to see them focus on less well-known companies. It seems quite possible that the reference human responses (from exponent) were already present in the training data.
I would be interested to hear more about how much they were discussing their failures/challenges with others. It sounds like this might be a case of, "I finally talked to someone about it and they unblocked me in an hour."
When I was in grad school I was very hesitant to ask others for help or feedback. Big mistake! I see similar things with interns: they'll wait until a daily or weekly check-in to raise problems.
My approach now is to set a time budget; if I don't figure it out myself within X hours, then I have to ask someone for help.
I would love to learn whether TripleByte's data was sold off somewhere. I had this concern too (which was probably a bit overblown). One of the main scenarios I worried about was the firm being sold off for parts.
You could check out ethical ads [1]. I have no first-hand experience with them as a publisher. But they are fairly unobtrusive on readthedocs [2] sites.