Ask HN: What interesting problems are you working on?
442 points by jlevers on Jan 28, 2020 | 689 comments
I know there are lots of really interesting problems out there waiting to be solved, but I haven't been exposed to much in the software world besides web technologies.

I'd love to hear about what interesting problems (technically or otherwise) you're working on -- and if you're willing to share more, I'm curious how you ended up working on them.

Thank you :)




I'm helping to build a scalable system for delivering high-value and life-saving medical supplies to hard-to-reach places via autonomous aircraft. The system is currently operating in Rwanda and Ghana, and is expanding aggressively over the next couple of years.

Specifically, I spend a lot of time thinking about and writing embedded software. The aircraft is fully autonomous and needs to be able to fly home safely even after suffering major component failures. I split my time between improving core architectural components, implementing new features, designing abstractions, and adding testing and tooling to make our ever-growing team work more efficiently.

I did FIRST robotics in high school where I mainly focused on controller firmware. I studied computer science in college while building the embedded electronics for solar powered race cars, and also worked part time on research projects at Toyota. After graduating with a Master's degree, I stumbled into a job at SpaceX where I worked a lot on the software for cargo Dragon, then built out a platform for microcontroller firmware development. I decided to leave SpaceX while happy, and spent a couple years working on the self driving car prototype (the one that looked like a koala) at Google. Coming up on my third year, I was itching for something less comfortable and decided to join a scrappy small startup with a heart of gold. Now it's in the hundreds of employees and getting crazier and crazier.


It's crazy that the USA has so many very interesting jobs at the vanguard of technology, while I (in Spain) get to work in banking. Here we don't have anything close to Toyota, Google, or SpaceX. The career path for someone who has a passion for their craft can't be compared between the USA and many other countries. Such a shame...

I wouldn't say I am qualified for a job related to embedded programming (even though I know how to code and it's my job), but even if I were, there wouldn't be any opportunity for me to bounce between companies like those in a million years.

PS: Sorry for the spelling, not native.


To be clear, I think the overwhelming majority of jobs in the US fall into your uninteresting category, e.g. banking, adtech, etc.


There's a handful of places in the USA with a thriving tech industry, and there's plenty of opportunity in general, but quite often people have to relocate for the really exciting opportunities. I haven't been to Spain, but my understanding of European culture is that folks generally stick around where they grew up. There's plenty of cool embedded stuff going on in Europe if you look for it. For example, I am aware of a lot of really neat drive-by-wire actuator development.


I feel you. Unfortunately, your location matters in terms of your network, your opportunities, and even a spouse. Frankly, because I realized this, I started searching for cool companies to work for that nobody knows about, and made a side project around it!


Are you working on Zipline [1] ?

[1]: https://www.youtube.com/watch?v=jEbRVNxL44c


Pro tip: searching "<HN handle> + github" frequently gets a real name, and if that doesn't reveal the current job, then look on LinkedIn.

The answer to your question is: Yes.


This is incredible and gives me engineering FOMO!

* higher purpose project saving lives and helping people: check

* well-engineered, reliable, smart solutions quadrupling efficiency: check

* planes, Africa, autonomous flight, landing and take off hacks, app based pre-flight checks, oh my!

* all for peaceful, life-supporting, humane purposes!!!


That's what I thought of as well. Great video explanation for anyone interested in learning more.


Just finished watching the video. Wow. Incredible.


You have basically had, in my opinion, the perfect career so far. Well done! :)


That's exactly the type of project I'd like to be working on -- embedded programming that literally saves lives. Thank you for sharing.

Your path to where you are now is crazy...sounds like you've had an exciting career!


That's awesome! Where can I learn more? I am building a job marketplace and curator for cool "unknown" companies that people can explore, and I would love to feature this! The impact aspect is a plus!


I've been to your airfield in Rwanda! It's such a great company.


I'm jealous, I haven't actually made it out myself yet! I've got a little baby to come home to. It's been a very supportive company in terms of work/life balance as well.


Totally understand. I have to say that watching the drones get captured on landing is one of the most futuristic things I've seen. You guys have done some great things in Rwanda - I lived there over the past 4 years, so I have firsthand knowledge.


By "hard to reach places", are you referring to difficulty due to geographic positioning, or geopolitical concerns?


Mainly geographic positioning due to lack of road infrastructure and reliable utilities. Roads are massive capital investments regardless of tech ability, and lots of populated places in the world are still hard to get to quickly. Many medical products are generally available but have a very short shelf life, and so our delivery service helps make them reachable to significantly more people.


Mind if I contact you and ask you questions about this? I’m interested.


Sure, or just ask them here if you think they would be of general interest.


Going to piggyback on this comment, sorry I am a few days late.

Where did you get your masters? I have an EE & CS B.S. from RPI plus 3 years of application development experience in the Fin Tech industry. I am strongly considering swapping industries to Embedded Control--that is what I enjoyed most in college--but I am unsure how to break into the industry. Do you recommend a masters or just sending some apps out? I have a good deal of C++ and micro-controller experience, but none commercial.


I got my bachelors and masters from University of Michigan. It was really just an excuse to stick around for a couple more semesters and do another solar car race. After a few years of experience, the masters doesn't really matter beyond what you personally gained out of the education.

There are tons of embedded software projects that lack software engineering rigor. If you're good at unit testing and mocking, for example, there's no reason why you can't unit test embedded code. Applying general software engineering practices to embedded code (effectively) is a good way to differentiate yourself.
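
To make the mocking point concrete, here is a minimal sketch of the seam (Python for brevity, with made-up names; in C you'd get the same effect with function pointers or link-time substitution):

  # Hide the hardware read behind a callable so the decision logic
  # can be unit tested with a fake instead of a real ADC.
  from unittest.mock import Mock

  def battery_low(read_adc_mv, threshold_mv=3300):
      """Pure logic: is the battery low, given an ADC reading in mV?"""
      return read_adc_mv() < threshold_mv

  def test_battery_low():
      assert battery_low(Mock(return_value=3100)) is True   # sagging pack
      assert battery_low(Mock(return_value=4100)) is False  # healthy pack

  test_battery_low()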


Thanks for this Sergeant! My hunch was that embedded code was lacking some of the more refined software engineering principles like continuous integration. I'll continue to frame my cover letters around that and give the masters some more thought.


I'm looking for an internship in July-August. I built a lot of model planes when I was younger, and what you're doing seems amazing! Is there any way we could talk further?


The best way to get into the system is by submitting your resume on the careers page. I'll give a heads-up to the recruiting team to look out for any submissions mentioning hacker news.


Thank you! I'll do it by this weekend.


Zipline?


Given that the two markets he listed are the markets I know Zipline is in, I’m willing to bet yes.


Solving the world's trillion-dollar energy storage crisis. (multi-trillion, actually.) https://www.terramenthq.com/

About a year ago, I started spending more time researching climate change. I learned how important energy storage will be in enabling renewable energy to displace fossil fuels. The more I read, the more fascinated I became with the idea of building underground pumped hydro energy storage. I found a research paper from the U.S. DOE written in 1984 showing that the idea was perfectly feasible and affordable, but it seems that nearly everyone has forgotten about it since. (They didn't build it at the time because the demand wasn't there yet. Now energy storage demand is growing exponentially.)

A year later, I'm applying for grant funding to get it built. I know that nearly everyone will tell me I can't do it for this or that reason, because people don't like change and they're scared of big things even when the research shows they make perfect sense. But I'm doing it anyway because no one else is getting it done. The idea is too compelling and too important to ignore. So here goes nothing!


You are working on the most urgent and important topic that I know of. But it is also very hard to pull off. I wish you the best of success.

Here are two recent startups in the field with multi-million funding. They were serious approaches. Many people involved with good planning, etc. They still fizzled out when it came to installing their first capacity.

https://www.power-technology.com/news/newsgaelectric-receive...

https://www.greentechmedia.com/articles/read/lightsail-energ...

I believe a reason for the funding problems is the high uncertainty for the economics of storage. Electrical energy is traded in a market. And your trading strategy in the market has a big impact on whether you earn money. Without solid numbers for energy storage and the expected trading outcome, investors will have a hard time.


Yeah if you drill into how the Tesla grid storage solutions make money, it’s not just about the storage capacity but also about being able to respond to demand or frequency issues extremely quickly, which LiIon batteries are very good at. There’s a lot of money available to the fastest dispatcher.


"A lot of money" seems excessive, at least in Europe. The flexibility market's profitability depends a lot on the national market it plays in; prices are quite low in Germany and northern Europe, for instance, but very high in Australia.

Rather than a way of making money, I like to think about flexibility and fast dispatching as an enabler for way more renewables to come online, and that is crucial for the human race right now.


Makes sense -- the grid storage system I read about was indeed the one they built for the Australian wind farm.

I think there's a big gap for both types of storage - fast dispatching for intraday demand variations, replacing gas peakers, and more static storage as in the OP for multi-day gaps in renewable production such as periods of high pressure during winter when wind speeds and irradiance are both very low.

Can't wait to see how this market develops.


One of my friends is a physicist who planned to install a closed cistern under his house and heat it up in summer with mostly solar energy. It looked really promising, and he had all the numbers figured out to supply a family home with heating and warm water through a normal winter. He didn't do it in the end because there were some problems with building a cistern on his property, and because of the large up-front investment necessary.
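
For a rough sense of the numbers (my own back-of-the-envelope with assumed figures, not his actual spreadsheet): water stores about 4,186 J per kg per degree C, so a store sized for a typical winter's heating demand is large but not absurd.

  # Back-of-the-envelope seasonal hot-water store sizing.
  # Demand and temperature swing are illustrative assumptions.
  C_WATER = 4186            # J per kg per deg C
  demand_kwh = 8000         # assumed winter heating demand, kWh
  delta_t = 50              # assumed usable temperature swing, deg C

  joules = demand_kwh * 3.6e6
  mass_kg = joules / (C_WATER * delta_t)
  print(f"{mass_kg / 1000:.0f} m^3 of water")   # ~138 m^3, roughly a 5 m cube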

I wish you luck with your endeavor. I think you are correct that the problem of energy storage is the most important one to solve to allow renewables to really take off for general power supply.

I also believe that the efficiency of storage is secondary to a degree, as renewables can supply an enormous amount of energy, so losses from pumps or energy conversions are nearly insignificant -- at least at the moment, when energy storage is in such bad shape. Certainly a lot more promising than having batteries everywhere.


Funny to think the Romans were doing this thousands of years ago (and probably civilizations before them as well), and here we are just now coming around again to the idea of heating/cooling via cistern storage.


True, but current techniques for thermal insulation could really make it quite efficient. The optimal cistern would be a giant ball of water under your house.

I don't have his spreadsheet anymore, but it seemed really solid.


There's a large multi-tower project in Toronto that includes a heating and cooling system built by Enwave that incorporates a very large cistern / well, building on their prior success cooling the downtown core: https://www.cbc.ca/news/business/climate-heat-cooling-1.5437...


Kinda surprised to hear that date, as the UK has a pumped hydro plant[1] which started construction in 1974 and opened in 1984; Tom Scott has a YouTube video about it[2]. It's not all underground, but the machinery is built into a hill and it pumps from a reservoir at the bottom up to the top when electricity is cheap, and is used as a fast-startup generator when demand is higher. Is it really more feasible to build the upper reservoir underground, than on top of, somewhere? Surely "on top of" is higher, easier to get to, easier to flatten, and much cheaper?

[1] https://en.wikipedia.org/wiki/Dinorwig_Power_Station

[2] https://www.youtube.com/watch?v=6Jx_bJgIFhI


The whole problem is that there are entire countries without hills suitable for pumped hydro reservoirs - the Netherlands, for example, which has a high per-capita energy demand.


So the plan is to build an underground reservoir, then dig deeper and build the lower reservoir? At that point, wouldn't it be easier to use the ocean as the upper reservoir, and dig down to build the lower reservoir, and pump seawater around?

How much water do you have to move to power a house, anyway? It must be a lot - a truck pulling a tank of water goes up a hill as part of a journey without worrying about running out of gas, and they could be carrying 40,000 litres or 40 tons of water. Presumably there is no way you could move enough water at home to make a hydro plant - pump it up at night with cheap electricity and run it down at peak time to save money?
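
Rough math backs that up (the 100 m head is an illustrative assumption):

  # Energy stored by pumping water up at household scale.
  G = 9.81                  # m/s^2
  mass_kg = 40_000          # the 40-tonne tanker mentioned above
  head_m = 100              # assume a generous 100 m of height

  kwh = mass_kg * G * head_m / 3.6e6
  print(f"{kwh:.1f} kWh")   # ~10.9 kWh, before pump/turbine losses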


The Netherlands is caught in an endless war against the Sea. They will make her give them the energy or die trying.


I live near Dinorwig pumped storage hydro and can highly recommend the tours they do if you're in the area. The tour bus drives into the mountain and stops literally a few metres from the generating turbines where you can get out and take photos of the huge underground turbine hall.


I've actually been thinking about the issue of energy storage a lot recently -- I've read a ton about how lithium-ion battery production is exploding (usually not literally), but it seems unreasonable to store enormous amounts of renewable energy in a device that itself has to be replaced after a certain number of cycles. The device used for pumped energy storage -- a tank (to simplify greatly) -- basically never needs to be replaced.

It's really cool to see a feasible alternative to batteries. I think climate change is the single most important problem anyone can be working on right now -- amazing that you've found such a massive lever to pull on this issue.


> it seems unreasonable to store enormous amounts of renewable energy in a device that itself has to be replaced after a certain number of cycles. The device used for pumped energy storage -- a tank (to simplify greatly) -- basically never needs to be replaced.

It's a fair point, but I think you oversimplify the pumped hydro case. Pumped hydro also has quite a few electromechanical components (turbines/motors), electronics, and other moving parts (valves, overflows). I can imagine some of these components require semi-regular maintenance (hence the maintenance shaft in the diagram on OP's website).

You'd really have to run the numbers to see which costs more to maintain in the long term.


That's a good point. I think it seems likely that given the relatively larger storage capacity per unit[0] with pumped hydro vs. batteries, that the overall maintenance costs -- including the environmental costs of materials needed -- would be lower, but you're absolutely right that we'd have to do the math to know for sure.

[0]When I say "per unit," I just mean that a huge battery is made up of many cells that will need to be replaced individually, whereas large pumped hydro facilities are still only a small number of total reservoirs.


There IS the cost of maintaining pump systems too, which isn't terrible, but it's not zero either.


Thanks! Yes, our research agrees that the short life of Li-ion is a problem, and it's one of the reasons why we believe our solution has so much more promise than Li-ion for grid-scale storage.

The pump/turbine technology we use is the same that's been used for a hundred-plus years in traditional pumped hydro dams, and the maintenance cost is very low. The life of a project is 40+ years, and in reality it can be 100 years with a relatively low amount of maintenance. The San Diego County research posted on our website has good figures on this. Thanks!


Hydro has more loss, which is worrisome. Maybe 5x the loss per cycle compared to batteries?


Hmm, I think what you're alluding to is that pumped hydro is about 70-85% efficient and Li-ion is sometimes quoted at 100% efficient (in theory). But here are some more details.

In reality, when Li-ion batteries are installed in a large system, I believe the quoted round-trip efficiency is much closer to PHS. Sorry, I can't find the best research to cite right now, but here are a couple of sources I found with a quick search.

"lithium-based ESS rated for two hours at rated power will have an AC round-trip efficiency of 75 to 85%." https://www.windpowerengineering.com/how-three-battery-types...

https://www.sciencedirect.com/science/article/abs/pii/S03062... "Conversion round-trip efficiency is in the range of 70–80%"

This one says 90-95% https://researchinterfaces.com/lithium-ion-batteries-grid-en...

I've heard that Li-ion installations can get up to 90-95% round trip, which is fantastic, and better than PHS for sure. But it's not the most important detail in the equation. Here's why:

One thing to remember is that power is lost all over the system in conversion and transmission. So raw efficiency can be less important than getting the right capacity to the right place on the grid. And that brings us to cost.

Even though PHS is a little less efficient than Li-ion, 85% for PHS is still really good. (See my other comment below about 70% vs 85%.) And the math shows that investing in PHS is simply cheaper -- even after assuming that Li-ion will drop in price by 3x in the coming decades. This is partly because Li-ion has a much shorter life span and needs to be replaced about every decade.

Li-ion is still great and super important! But it's not looking like the best contender right now for grid-scale storage.


Interesting, I didn't know that. Are there practical ways to decrease the amount of loss? Do you know what the actual loss percentages are for each?


I think the point you just made about the rate of replacement of lithium-ion batteries is very pertinent to solar panels (to the point that many consider solar a scam).


I've never heard that said before, but I'm interested to hear more. Do you have any sources on that? My understanding was that solar panels last quite a long time.


I hope to convey this in the least volatile manner, but I must bring it up.

> I learned how important energy storage will be to enable renewable energy to displace fossil fuels.

The above is a reasonable statement, however, your website says the following:

> We can’t quit carbon without energy storage. To stop climate change, renewables must replace fossil fuels.

> Without energy storage, renewables will fail to reach even 25% of the energy market by 2040. This will cause global temperatures to rise over 3°C, a level which will cause catastrophic climate damage.

Those are not only misleading but outright lies. Now, I won't hide my bias here: I work on nuclear fission. But here's the reality: there are many possible pathways to net-zero carbon and limiting global temperature rise to well below 3°C (below 1.5°C, in fact).

To just list a few:

* Massive adoption of nuclear fission alone

* Development & massive adoption of nuclear fusion alone

* Shift from coal&oil to natural gas, cleaner fossil fuels + scaling carbon capture/sequestration

* Shift from fossil fuels to renewables + storage (probably not alone)

Or any combination of those, in addition to a number of alternative approaches.

---

Edit: Also, it should be noted that the energy sector alone represents only about 1/5 of the emissions problem. In order to get to net-zero GHG and stop anthropogenic climate change, the clean energy sector needs to expand well past the current global TPES and supply net-zero electricity that allows for the decarbonization of the other main contributors:

Agriculture, steel+cement+plastic, transportation, buildings&appliances, and flora loss leading to lost carbon stores (deforestation etc)

Even if renewables and storage could supply 100% of our electricity or even total power supply, you would still only be 1/5 done solving climate change. There is no unitary solution.

---

Acting as though renewables are necessary, instead of one of multiple options, is denial or malicious. In reality, renewable energy is nowhere near capable of reliably and safely taking on a large portion of our energy supply globally. It is expensive (you can make claims about unit cost, but what really matters is country-scale - look at German electricity prices vs. just about everywhere else), it is dangerous, it takes a lot of land area, and it is the least reliable.

I don't want to spend a lot of time here stomping on renewables, but there is plenty of reason to, and my main point is that I feel it is unjust and immoral for you to claim that renewables "must replace fossil fuels" if we are to stop climate change. It's just not true, and you need to admit that.

The energy industry is arguably one of if not the most important backbone of our modern society, and responsible for the safety and health of billions of people. Whether you're working on the generation or storage side, it is all our responsibility to be honest and make true claims - not to spread biased misinformation when it benefits your particular solution.

I'd like to finish by making it clear I'm very happy you're working on your tech and I hope you succeed in making it the best it can be - renewables are certainly trending toward higher adoption, and we need reliable, efficient, scalable storage solutions in order to avoid dangerous outages and other grid issues.

You bring up valid criticisms of existing solutions, although I do think you should also be fair to those. Most things in life are a trade-off: maybe pumped hydro is a better majority solution for the grid, but lithium ion is an incredibly important, successful and expanding technology that needs to be given credit for its wide range of great applications.

I hope this response has not been inflammatory: I just very much care about maintaining a truthful public discussion around energy. I wish you the best of luck, and I hope you can take something useful from this.


Thanks for expressing this, johnmorrison, and for being very uninflammatory about it :) Here is my white paper, which cites ample research. https://www.terramenthq.com/underground-pumped-hydroelectric...

If you want to send me research supporting some of your thoughts here then I'd love to see it. I do know for example that it's a very valid debate whether or not nuclear has a place in our climate fight.

I'll try to re-work the language in my materials to make sure I'm not excluding other valid viewpoints. Thanks!


> Without energy storage, renewables will fail

but none of the below are renewables

* Massive adoption of nuclear fission alone

* Development & massive adoption of nuclear fusion alone

* Shift from coal&oil to natural gas, cleaner fossil fuels + scaling carbon capture/sequestration


You're clipping the wrong part of the sentence.

Parent is primarily disputing: "To stop climate change, renewables must replace fossil fuels." and if renewables fail, "this will cause global temperatures to rise over 3°C"


> only represents about 1/5 of the emissions problem

I wonder what the percentage would be like if the energy sector needs to provide enough energy to replace all fossil fuels. It's certainly much higher than 20%.


Yup. That's why we need to exceed the TPES with clean energy. We also need to significantly expand TPES if we are going to eliminate most of the remaining poverty in the world, to improve mean QoL.

I'm hoping fission can scale to about 2 EWh annually in the next several decades. Should be noted this is quite aggressive scaling. 500 PWh is more than enough to reach net-zero emissions.


I am for research into fission, but it is expensive to deploy, and in my opinion it needs to solve the nuclear waste problem. I also think the supply of nuclear fuel can be an issue, although there are some concepts for fuels other than uranium.

> look at German electricity prices

True, pretty expensive. But the prices also include capital for investment in energy infrastructure, such as building lines to get power from the north (high production) to the south (high consumption). The implementation tends to be slow, but there are other reasons for that.

Another example is Norway, which uses 98% hydro power. Sure, they have topographical advantages not available everywhere. But technologies like this could open up more possibilities.

So fission can be utilized, but I doubt that Germany closing plants was a terrible decision.


> I also think supply of nuclear fuel can be an issue, although there are some concepts for other types than uranium.

There are 3 fission fuels occurring in nature: Th232, U235, U238

Actually, our reserves of Uranium are greater (by energy available to generate) than all of our Coal, Oil and Natural Gas reserves combined.

Our Thorium reserves are even greater than those.

In fact, Thorium is extracted as a byproduct of Rare Earth Metal extraction, and so we currently mine enough Th232 per year to replace the entire global energy and fuel industry even though there is no demand for Th232 extraction. Kind of mind blowing.

---

> [fission] it is expensive in deployment

I don't see where this idea comes from - in real life, regions which are powered by more fission have significantly cheaper electricity than those who are powered by less.

---

> the problem with nuclear waste

I genuinely don't think there is a problem with nuclear waste, and that this concern is a myth / misunderstanding based on a mix of fear-mongering via conflation with nuclear weapons and a lack of comparison.

Consider the following: all energy sources have waste products - nothing is 100% efficient.

Fossil fuels pump literally billions of tonnes of toxic gas into the air as their waste product. It moves around, we can't store it, and it is responsible for the deaths of millions of people each year through air pollution.

Renewables production has the same issue (although different gases), and also tends to pollute the water and local environment with other toxic chemicals and metals.

Nuclear fission produces the densest, smallest amount of waste of any source, and it is solid and easy to manage. We know where quite literally all of it is, and it doesn't hurt anybody or negatively affect the environment in any way as long as you keep it stored somewhere.

As far as I'm concerned, nuclear energy does not have a waste problem, it has a waste solution. Global warming is the problem with energy waste, more specifically it is the problem with hydrocarbon waste.

---

> Another example is Norway that uses 98% hydro power. Sure, they have topological advantages not available everywhere. But technologies like this could open up more possibilities.

Agree with you. Renewables tend to vary in effectiveness based on location - in those locations which are well-suited for them, I think they should be used! Though I'm not sure what you mean by "could open up more possibilities" - we've had hydro power for thousands of years.

---

> I doubt that Germany closing plants was a terrible decision.

Note the following excerpt from Mike Shellenberger on Twitter:

  Germany’s renewables experiment is over. 

  By 2025 it will have spent $580B to make
  electricity nearly 2x more expensive & 10x
  more carbon-intensive than France’s. 

  The reason renewables can’t power modern
  civilization is because they were never
  meant to.

  A major new study of Germany's nuclear
  phase-out finds

  - it imposes "an annual cost of
    roughly $12B/year"

  - "over 70% of the cost is from the 1,100
    excess deaths/year from air pollution
    from coal plants operating in place of
    the shutdown nuclear plants"


I like to use current numbers, because extrapolating development is often pretty close to lying.

And Germany has much to do for carbon efficiency, but for total emissions it is somewhere in the middle.

https://file.scirp.org/pdf/ME20120500016_67195744.pdf

Data is for overall efficiency, not power production.

And Shellenberger is a nuclear lobbyist for that matter and his statements should be scrutinized. I am not fully content with the decision to make such a cut for fission power generation, but all these numbers are conjecture.


> Shellenberger is a nuclear lobbyist

I think it is extremely foolish to make caricatures of people. Twenty years ago, Elon Musk was a software startup guy who had no idea about anything hardware - but that's only because nobody bothered to consider the full human behind the caricature.

Mike Shellenberger was an anti-nuclear activist for much of his early life and has always been (and is still) an environmentalist. Furthermore, he may be a lobbyist now (I'm not sure if you are right or wrong), but he ran for governor of California a few years ago. He has been very explicit in explaining his reasoning for shifting from anti-nuclear to pro-nuclear in multiple talks and articles.

Take a look at the full human, and your justification for scrutiny fades away. Everybody should be scrutinized to an extent, but he is not fundamentally a biased lobbyist with financial incentives.

> Germany has much to do for carbon efficiency, but for total emissions it is somewhere in the middle.

This is the problem, man. Germany has spent hundreds of billions of dollars on renewables and they still have high GHG emissions - all they have to show for their massive spending is a couple thousand extra deaths per year and higher electricity prices.

If you gave my company the same amount of money, we'd have the entire world to net zero emissions within two decades.

Goes to show the inefficiency of government funded programs, and the awful incompatibility of renewable energy with a reliable, affordable consumer electricity market.

> I like to use current numbers, because extrapolating development is often pretty close to lying.

We can use current and past numbers: for its entire existence, nuclear fission has been the (a) safest, (b) highest fuel density, (c) least waste-producing, (d) lowest emissions, (e) most reliable mass energy source humanity has ever had.

The new generation of reactors will only improve this divide between fission and everything else. If you are against extrapolating development and want to rely on established numbers, you must conclude [fission > renewables]

I know I'm biased, but I'm also right about all those superlatives.


Just to make a note: my energy bill here in Germany was always high! Seriously, it was high even before Germany did a lot for renewables.


How large is a typical system - how much land do you need to excavate?


If we attach our installation to an existing reservoir, we'll take up nearly zero land above ground. If we build a new self-contained upper reservoir, it will be about 0.5 miles on each side and 40 feet deep. It can be built with material excavated from the lower reservoir. This may seem large, but it provides a huge amount of storage -- 20 GWh, enough to balance the load of a large city. And keep in mind that it's about the same size as the many large reservoirs that are scattered around a large city.

Again, the most promising option would be to simply attach our installation to an existing reservoir. We don't use any additional water, we just borrow it. For an amply sized reservoir, each cycle would just raise and lower the water level by an inch or so. Another promising option is that we can even use the ocean as an upper reservoir; salt water can be accommodated -- see our notes about the Okinawa Yanbaru Station.

There are more details in our white paper posted on the website.


Why would a new upper reservoir need to be so wide and shallow, rather than having much less surface area and being much deeper?


Good question: it doesn't really need to be; those numbers are partly just to visualize it. But we do have some reasons to favor more surface area:

- less digging

- less reinforcing needed

- it's more stable

- in some cases we're interested in floating solar on top of the reservoir, which wouldn't work well if the reservoir were too deep.

But it's certainly not out of the question to go deeper instead.


This would correspond to a height difference of about 1 km between upper and lower reservoir right?


Yup! More fun facts about a 1 km head height... Off-the-shelf turbines are actually spec'd for a max head height of something closer to 0.5 km, so the design calls for a double drop. This design approach is taken from the DOE research linked on our website.


So where the geology allows it, why not go even deeper with the lowest reservoir and put multiple turbines in series, with perhaps a small reservoir every 0.5 km?

Then the total energy capacity is E = V * rho * g * h, so the energy stored is proportional to height, while the tunnel boring price is roughly constant as long as the bored volume of the reservoirs is much larger than the volume of the vertical shafts.

I realize it's a bit oversimplified, but consider 2 prices: p1, the price per volume for boring horizontally (for the reservoirs), and p2, the price per volume for boring vertically. Increasing the reservoir size by a volume delta V requires boring 2 * delta V (upper and lower reservoir), while for boring vertically the cost of the extra height depends on the shaft diameter...
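
For scale, plugging the thread's figures (0.5 mi per side, 40 ft deep, 1 km head, all taken from the comments above) into E = V * rho * g * h lands right around the 20 GWh quoted:

  # Gravitational storage for the reservoir described upthread.
  RHO, G = 1000, 9.81           # kg/m^3, m/s^2
  side_m = 0.5 * 1609.34        # 0.5 mile per side
  depth_m = 40 * 0.3048         # 40 feet deep
  head_m = 1000                 # 1 km head height

  volume_m3 = side_m**2 * depth_m         # ~7.9e6 m^3
  energy_j = volume_m3 * RHO * G * head_m
  print(f"{energy_j / 3.6e12:.1f} GWh")   # ~21.5 GWh, before losses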


Pumping water up a mountain to store energy has been used around the world with much success; in my opinion, it seems to be the most realistic way to store energy efficiently.

If you can remove the need for a mountain, you could scale this out to everyone in the world and single-handedly solve this problem.

There will be other problems to overcome, but someone will figure them out -- why not you? I wish you all the best in this very important effort.


That's about 90%+ efficiency pumping it up, and similar generating on the way down (electrical conversion), so maybe 85% total storage efficiency (roughly 0.92 x 0.92). How does that compare to battery or other storage systems? Lithium is 98% by some sources.


The UPHS seems like a durable system that could store and deliver many more cycles than a battery?


Yeah total cost of ownership would tend to even things out.


If you are looking for an informed academic perspective on energy trading and renewables, I recommend contacting the following researchers [1]. Just write them an email and explain what you are working on; I imagine they'll be interested!

1: http://www.is3.uni-koeln.de/


> affordable

affordable, but efficiency is so-so. 70-80% according to Wikipedia [1]

[1] https://en.wikipedia.org/wiki/Pumped-storage_hydroelectricit...


I love this quote: Hydro pumped storage is “astoundingly efficient…In this future world where we want renewables to get 20%, 30%, or 50% of our electricity generation, you need pumped hydro storage. It’s an incredible opportunity.” – U.S. Energy Secretary Dr. Steven Chu in 2009. Still true today.

And actually, we think that 80-85% round trip is more accurate for our projects because we'll use the latest and greatest tech (variable-speed reversible Francis-style pump/generator turbines). I think the 70% figure comes from older projects with pump/turbines that were not quite as efficient.


It does not matter. The alternative is curtailing wind parks/solar generation, and wasting clean energy and even more money.


> affordable, but efficiency is so-so. 70-80% according to Wikipedia

Say what? So-so? 70-80% efficiency sounds pretty damn amazing!


Depends on the alternatives, which at the moment are indeed considerably worse.


Do you have a link to the 1984 paper?

How can I help?


The link to the 1984 paper (from their website): https://www.osti.gov/biblio/6517343


Thanks to the commenter below who posted the link to the paper. Yes please! If you want to email me I'd love to see how we could work together. eric at terramenthq.com or syllablehq.com


Alerting people when proposals are put before municipal councils to develop natural land. I found out too late that a huge, beautiful forest where I live is going to be ripped up and turned into investment condos. So in the interest of giving natural land a fighting chance, I'm setting up a system that will notify users when an address they've submitted is being rezoned.

The challenge is obviously scaling, since every municipality is different. For now it's going to cover my region and we'll see from there.


Sounds challenging, and I agree that adapting this to each municipal area would take a huge amount of effort.

I tried something similar, but mostly to figure out where land is being purchased recently in a region. But the land/parcel/address systems are all over the place, and even that info is not consistent across cities.

Have you looked at data providers who may have this data?


Agreed, the differences between municipalities makes this really hard to scale. If data providers like the ones you mention don't exist, the two ideas that immediately come to mind are a) becoming that data provider (obviously), or b) building a platform for municipalities to store their land ownership data on.

Both sound like interesting problems, and it would be awesome if municipal-level land data was available at scale.


Exactly -- while the alert system is interesting and does have value, if they are putting in the tough, long, and grindingly harsh effort to compile these disparate data sources, that itself is the moat and becomes the product. Definitely worth doing!


How do you become that data provider? You need some scalable way to get all that data, right?


I'm not sure how to do it scalably, other than by becoming the host for that data, which is why I included my second option. It seems much easier (and much more profitable) than figuring out how to access the data in its existing format.


Ultimately someone has to do the hard to scale 'last mile' dirty work I suppose.


Do you know of or recommend land trust organizations that collect money from donors to simply buy this type of land to protect it?


To suggest another axis you could expand along, there is a broader issue with notifications about planning. You could have a system that covered all things and you could ask it to notify you about applications involving:

"Forests within 100 miles"

"High rises within 10 miles"

"Anything within 0.5 miles"


What's the format of this going to look like? If it's closer to open source, I'm sure some people would spend a weekend getting it up and running in their area if the infrastructure is built.


I'm hoping they will!! For most municipalities, it should be very easy... just submit a URL and the site will pull the markup and search it for an address string. More complicated municipal setups, or municipalities with actual data feeds, will be tougher.
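
For anyone who wants to try this for their own municipality, the core check is tiny (a minimal sketch; the URL and address are placeholders, and real agendas will need smarter matching than a plain substring):

  # Pull a council page's markup and search it for an address string.
  import urllib.request

  def page_mentions_address(url, address):
      with urllib.request.urlopen(url) as resp:
          html = resp.read().decode("utf-8", errors="replace")
      return address.lower() in html.lower()

  # Placeholder URL and address; substitute your municipality's agenda page.
  if page_mentions_address("https://example.gov/council/agendas", "123 Forest Rd"):
      print("address found: send a notification")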


That's a big challenge, kudos!

In my one county alone there are 90+ municipalities, each with its own Planning and Zoning Commission, and most with their own (varying levels of) website. I'd say 5-10% don't have a website at all.

In your situation, how are you getting data for when land is up for sale/zoning etc.?


That's a really worthwhile problem to be working on. Kudos to you. I'd be gutted if something like that happened in my area, although I'm lucky enough to live in an area that's mostly trees.


I can't wait to use nimbyism as a service


We're trying to improve the security of the Internet by replacing Certificate Authorities with a distributed root of trust.

DNS is currently centralized, controlled by a few organizations at the top of the hierarchy (namely ICANN), and easily censored by governments. Trust in HTTPS is delegated by CAs, but security follows a one-of-many model, where only one CA out of thousands needs to be compromised in order for your traffic to be compromised.

We're building on top of a new protocol (https://handshake.org, launching in 7 days!!) to create an alternate root zone that's distributed. Developers can register their own TLDs and truly own them by controlling their private keys. In addition, they can pin TLSA certs to their TLD so that CAs aren't needed anymore.

I wrote a more in-depth blog post here: https://www.namebase.io/blog/meet-handshake-decentralizing-d...


This is really interesting. Are you using concepts from self-sovereign identity¹²? Do you think there is a relevant intersection?

¹ http://www.lifewithalacrity.com/2016/04/the-path-to-self-sov...

² https://w3c-ccg.github.io/did-primer/


Yes! It's funny you mention that; I just bought The Sovereign Individual. I haven't read it yet, but from a cursory glance I think there is a lot of intersection. Would love to discuss more -- we have a Discord I can invite you to if you're interested; just ping me at the email in my profile.


All blockchains use self-sovereign identity. They just don't use that buzzword.


This is super exciting and definitely one of the foundational problems of the internet. Happy to help in any possible way!


@chinmays Awesome, can you join our Discord? Let's discuss there -- we just launched today!! https://discord.gg/9r9wUrq


Do you have any plans to address TLD squatting?


Handshake has built-in mechanisms to prevent squatting. All TLDs are won through an open Vickrey auction (if you win, you pay the second-highest bid for the TLD). This prevents squatters from being able to easily buy up all the good names at once.

There is an issue, though: the auction system gives an early advantage in buying names for cheap. If only 100 people are buying names on day 1, they'll be able to buy a lot of the names without competition. Handshake has a mechanism to prevent this: names are released for bidding over the course of the first year, so that people who learn about it six months late can still register good names. The release schedule is basically a hash(name) % 52 to determine in which week you can start registering any name.
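
In sketch form, the week gating looks like this (a simplified illustration; the real consensus code has its own hashing and parameters):

  # Sketch of week-gated name release, illustrating hash(name) % 52.
  import hashlib

  def release_week(name: str) -> int:
      digest = hashlib.sha256(name.encode()).digest()
      return int.from_bytes(digest[:4], "big") % 52

  def can_bid(name: str, weeks_since_launch: int) -> bool:
      return weeks_since_launch >= release_week(name)

  print(release_week("example"), can_bid("example", 26))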


I'm growing the freshest lettuce, iron-rich kale, and a lot of other leafy greens!

While in college (CS & Math), I got heavily interested in growing food in the most efficient and healthiest way possible. I was a dreamer when I started, so I thought more about how to grow 'earthly' produce on Mars, but then I realized that my own planet Earth is so massively underserved.

It's basically like this: I mastered growing leafy greens in an indoor closed environment, then I tried to cover all the major physical and biological markers, and now I'm optimizing the levels of the 5-6 variables (currently) that I can fully control and that may produce the best phenotype: CO2, O2, light, nitrate, P, K. These parameters have their own sub-definitions.

So far I have had great results. I am trying to raise investment so I can finally make it a reality. Check the numbers here: hexafarms.com (no fluff)


> THE FINAL PRODUCE IS THE ULTIMATE MANIFESTATION OF THEIR PLATONIC IDEAL FORMS

How's the taste?

Not denying it's possible to grow food very efficiently indoors, but a vastly oversimplified opinion is that plants need sunlight to be tasty. Is this wrong?


You'll have to take my word for it, but taste-wise (based on my surveys too) it's the 'best' they've had (mostly city dwellers I'm talking about).

Yes, you don't really need sunlight whatsoever. I was shocked myself until I recalled the high school biology concepts of genotype and phenotype, i.e. the genetic structure that manifests itself given the right physical conditions (at least for plants). As for the plants' nutrients, here's a classic: Teaming with Nutrients: The Organic Gardener's Guide to Optimizing Plant Nutrition, by Lowenfels. I was amazed to find how complex, yet simple, plants are.


You should check out "The Real Martian" https://www.youtube.com/channel/UCd8t8Dq8oZeAjGDx_87azBw/abo...

and Beanstalk (a YC company) https://www.beanstalk.farm/


Funny story: I was rejected by YC last batch. But I get it; I figured they look for traction and whatnot, so I made the pitch video about a very specific aspect of Hexafarms, monitoring, since some people were willing to check it out. No doubt YC would reject it. On the other hand, the Thiel Foundation reached out to me, but they had a dropout requirement and whatnot that I was not able to fulfill (and after a while they stopped reaching out too).

Thanks for the references.


The Real Martian is great. Go back and watch the Hab 1 videos; it was truly sad to see snow collapse it. He's now quit his full-time job and joined a startup, and they're about to go full steam ahead into Hab 2 after months of modelling and other work. Ultimately they're looking at creating a commercial product where a family, or a couple of families, can erect a habitat and grow a decent amount of their food.

Hab 1 had aquaponics and fish; not sure what Hab 2 is going to look like, as they haven't shared much, but he's just started churning videos out again the past month or two.

It's a really neat project, I just hope he continues to show as much as he did with Hab 1 now that he's part of a startup.


Hey Dave,

Is it possible to set up a 'microfarm', similar to a window fridge appliance, in a part of an apartment room?

I'm ok with some manual work every 2 days, such as filling in a water container.

Besides water & nutrients, how much electricity would this use to grow a generation of leafy greens, per kg of produce?

Thanks for working on this!


Amazing! It would be awesome if people living far from traditional agricultural areas could access fresh greens without insane transportation costs (both financial and environmental).

Are your farming systems fully automated? If so, has that been more of a software challenge, or more of a mechatronics challenge?


> It would be awesome if people living far from traditional agricultural areas could access fresh greens without insane transportation costs (both financial and environmental)

That's what actually got me started. A head of lettuce travels 1,200 miles on average (https://ucanr.edu/datastoreFiles/608-319.pdf), and it is so disconnected from the site of consumption.

My vision is to have distributed farms every eight blocks or so (contrary to conventional wisdom, I have found that smaller indoor farms will be more profitable).

Not really; it's quite manual (as of now). I've had to change countries almost three times since I started, so I'm focusing more on the data and training-algorithms part to figure out the right parameters (the farm is just a testbed). One example would be using a $5 camera for measuring growth rather than buying a $100 3D camera.
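
As a flavor of the cheap-camera idea (a simplified sketch; the thresholds and file name are placeholders): a crude but useful growth proxy is canopy coverage, the fraction of green pixels per frame.

  # Estimate canopy coverage from a webcam frame by counting
  # pixels that fall in a green hue band.
  import cv2
  import numpy as np

  frame = cv2.imread("tray.jpg")        # placeholder image file
  hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
  mask = cv2.inRange(hsv, np.array([35, 40, 40]), np.array([85, 255, 255]))
  print(f"canopy coverage: {mask.mean() / 255:.1%}")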


> mastered growing leafy greens in indoor closed environment

I love this! Makes me happy to see someone's working on such an interesting problem that would benefit many.

For feedback, I believe using photographs of the leafy greens would be effective in communicating your vision.


Thanks I'll do that.

I actually graduated from college this year, and for personal reasons I've had to change countries; now I'm in another Master's program... ready to drop out anytime. The whole project has been dead for months at a time! I'm mostly trying to leverage ML to optimize things. I guess that's what modern farming is missing (not ML per se, but optimization).

I'm trying to raise some investment (or in the worst case bootstrap and risk everything in the next few months), then I will go crazy with the idea.


This is awesome man, exactly what piqued my interest!


I'm working on pacing emails to a more manageable, calmer schedule. I'm doing it with an essentially UI-less system, which is a rather fun way to produce an app. It simply requires a user to update their email address on the website that emails them too frequently to a paced.email alias. E.g.

  johndoe.shopify@daily.paced.email
  johndoe.stripe@weekly.paced.email
  johndoe.github@monthly.paced.email
At the end of each period, a single email is sent to the real email address containing all of the messages the alias received over that timeframe.
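
The cadence lives entirely in the address, so the receiving side just parses it back out. Roughly (a simplified sketch, not the production code):

  # Parse a paced.email-style alias into (user, label, cadence).
  def parse_alias(address):
      local, domain = address.split("@", 1)
      user, label = local.split(".", 1)
      cadence = domain.split(".", 1)[0]
      assert cadence in {"daily", "weekly", "monthly"}
      return user, label, cadence

  print(parse_alias("johndoe.shopify@daily.paced.email"))
  # -> ('johndoe', 'shopify', 'daily')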

https://www.paced.email

I'd love to hear how you'd use it.


That's a neat idea.

> At the end of each period, a single email is sent to the real email address containing all of the messages the alias received over that timeframe.

Why not send each received mail individually? If you aggregate them first, it makes it very difficult to reply to individual messages with standard email clients.


Good question lqet, thanks. I did create a version that added each message as an EML file to the email with links to each file too. Perhaps a cunning combination of the two variants might be the way forward. Appreciate the suggestion, it's a good one.


I second the "delay-then-send" approach. Don't bother with a digest. Just hold email until the scheduled time then release them. Might want to put suitable intervals or you might get zinged for spam or otherwise throttled. You've probably hit that already.

I use a similar but far less fancy approach with email filters: I have everything put into its own filtered folders then only check them on a schedule.

Your approach is good because the schedule is right there in the email address.


Thanks, rs23296008n1. I toyed with the idea of a send-everything-at-once approach, but I feel it would be counterproductive to having a calm inbox. Getting 5, 10, 50 emails in quick succession would certainly raise my stress levels. Perhaps I can offer two or three digest variations: 1) all-in-one, as it is now (plus EML); 2) burst.

Food for thought.


Maybe have a dispatch interval. The weekly-on-tuesday emails get sent one every 5 minutes starting at some time.


Very good. I’m noting all these suggestions down. The app only launched a few days ago. I wanted to make sure it was a valid product first before doing too much to it. I’ll gradually add more functionality and examples over time.


Sounds like you have your mvp and have an incremental plan going forward. Good. The thing is out in the world - that is something a lot of folks don't do.


I'd imagine this is most useful for things that send frequent read-only emails. Someone personal whom you'd want to reply to would presumably get your normal address.


I haven't used this, but I see the utility. Wouldn't an admin UI to map IDs to periodicity be better than using a hard-coded subdomain? That way you can prevent bad actors from switching pace when they come to know of this site. I could also raise or lower the pace for an ID without having to go through the hassle of changing my mail ID. Also, doing that would let you sell the solution for use with custom domains.

I mean, use github@johndoe.paced.email and have an admin UI that lets you set "github@johndoe.paced.email" => "weekly".


These are some great suggestions. I'm starting to think about how I could use custom domains etc. I need to figure out the next steps for the app and what people would be prepared to pay for such a tool. Ideally, I'd like to keep everything simple when it comes to pricing and not have functionality based tiers. Not sure yet.


I like how this is done, I'd suggest forwarding to another existing email address, for example: johndoe_AT_gmail.com@weekly.paced.email

Then you don't even need a website.


I think there's a balancing act between making it memorable enough and simple enough. Great suggestion though, noted! Hacker News is incredible. A spectacular hive mind for mulling potential ideas over.


Great idea. My suggestion: why not use the Gmail-style johndoe+spotify@ suffix? Just because people would be more used to it. That way johndoe@ also would work.


An irritatingly large portion of websites don't let you put + in email addresses.


I ran into an issue where using the + notation required me to create a whole new account on airbnb because I had forgotten that I used + in my original email.


Thanks thewarpaint. Good point. Having read the below counterpoints though, I'm not quite sure now! I'll look into it.


On second thought, they have a point: I have never had an email address with a dot rejected, but I've seen it happen for the plus alias several times.


I'm tackling the issue of managing Reddit saves.

Across all platforms (not just Reddit), people including myself like to save/bookmark interesting content in the hopes of getting some use out of it later. The problem arises when you start accumulating too much content and forget to ever check that stuff out.

I'm working on a solution to help resurface Redditors' saved things using personalized newsletters. I'm calling it Unearth and users get to choose how frequently they want to receive their newsletter (daily, weekly, or monthly). The emails contain five of their saved comments or things and link directly to Reddit so that when viewing it, they can then decide whether or not to unsave it.

Basic functionality is all there, just needs some more styling and the landing page could be spruced up.

https://www.tryunearth.com/
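
For the curious, the saved-item fetch can be quite compact with PRAW. A simplified sketch with placeholder credentials (the production version would need storage, scheduling, and error handling on top):

  # Pull a user's saved Reddit items and pick five to resurface.
  import random
  import praw

  reddit = praw.Reddit(
      client_id="CLIENT_ID", client_secret="SECRET",
      refresh_token="REFRESH_TOKEN", user_agent="unearth-sketch/0.1",
  )

  saved = list(reddit.user.me().saved(limit=None))
  for item in random.sample(saved, k=min(5, len(saved))):
      # Saved "things" can be comments or submissions; both carry
      # a permalink back to Reddit.
      print(getattr(item, "title", None) or item.body[:80])
      print("https://www.reddit.com" + item.permalink)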


Signed up, and I love how fast it was to create an account. Literally two clicks and 5 seconds, as my password is saved in Google Chrome and you sign up through Reddit. I think you're on to something with that onboarding process.

Kinda different, kinda the same, but I'd love to use an app with much better search than the 'direct search' currently in most aggregator/note apps. If I searched 'quotes', it would rip out and return all the things in italics, in quotes, or things the algorithm deems quotes based on its scrape of the internet; kinda like Google, but 'personal search' based on my notes, articles, all my different emails (work, and my 37 different Gmail accounts), and websites I frequent (like Reddit, Hacker News comments, etc.). There was an HN article the other day that got me thinking about this problem, but I can't seem to find it. However, it approached the problem from a much deeper technical level, using Emacs and searching through code. If you could bring that into an easy-to-use, consumer-facing GUI, I think it'd have the potential to be pretty game-changing.

'Personalized Search, and we don't have to steal your data because you willingly give it to us' - Google


I believe this is the HN article you're referring to: https://news.ycombinator.com/item?id=22160572


I tried to make onboarding as frictionless as possible so this makes me happy to hear!

And that's a really interesting idea regarding search. Would love to see the HN thread/article you mentioned to get a better understanding of the concept. As of now, Unearth's only focus was on active content resurfacing, but I've seen many Redditors mention the wish to search their saves as well so I think I'll look more into this.

Appreciate the ideas, keep them coming.


Curious if you've thought about this as a browser extension where it injects what you've saved into the main reddit feed. For example, one saved item per refresh. So you naturally rediscover and engage with items you've saved in the past, with a decent algorithm to help prevent any fatigue from seeing the same item too many times.


canada_dry also brought up the idea of a browser extension (for privacy's sake). I think that paired with your idea of inserting saved content into the main feed is very enticing.

I would need to figure out how injection would work for saved comments, do you have any ideas? I'm definitely going to save this idea so thank you!


Awesome. Not sure how I'd handle comments, since this approach would aim to be as seamless as possible. Maybe when they click into the thread, you use the UI to remind them of the other saved items they have, using the right sidebar for example, but I don't like how that tries to grab attention from the core experience. They could also always click the browser extension, similar to Pocket, but I imagine this action would be used less compared to things naturally appearing on the pages they browse. You'll have to find ways to train users to use that behavior regularly; perhaps, again similar to Pocket, when they click "save" the browser extension shows a little popover so they see it saved in the extension, can tag it, etc., and know their other content/saved comments live there.


Wow, that's really neat! I sometimes hesitate to save something because I think "when would I really come back to this?" But this would probably get me to save more things that I find interesting.


Thanks! I've been hesitant to show it off thinking not many people would find it useful but you've given me hope :)

Feel free to try it out and let me know what you think or if you have any suggestions.


Playing devil's advocate, I'd really prefer this kind of functionality as a separate browser add-on - i.e. unlinked to my Reddit sign-on.

For privacy you needn't require the reddit ID of your users. Simply that they want to save something from reddit to their tryunearth.com account.


I appreciate you raising this concern, I honestly never thought about that.

> Simply that they want to save something from reddit to their tryunearth.com account

When you say that, I envision the extension overriding or extending Reddit's save button functionality by making an API call to the unearth backend. Is that kinda what you had in mind?


Exactly. Have initial import functionality in onboarding, where the user could somehow import their currently saved content. Thereafter you could have an extension that implements a 'save to Unearth' button cleanly in Reddit's UI.
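
For what it's worth, the backend half of that "save to Unearth" call could be pretty small. A hypothetical sketch in Flask (the endpoint name, fields, and in-memory store are all made up, not Unearth's actual API):

    from flask import Flask, request, jsonify

    app = Flask(__name__)
    saves = []  # stand-in for a real database

    @app.route("/api/saves", methods=["POST"])
    def save_item():
        item = request.get_json()
        # The extension would send the Reddit "fullname" (t1_... for comments,
        # t3_... for posts) plus a permalink, so both kinds of saved content
        # work the same way.
        saves.append({"fullname": item["fullname"],
                      "permalink": item["permalink"]})
        return jsonify(ok=True), 201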


This is a great idea. Rediscoverability is a big problem, especially with the growing popularity of personal knowledge systems (Notion, Roam, etc), which have been discussed a lot on HN.

I take a ton of notes on Notion, but I worry that I'll never see most of them again. Maybe part of the value is just in writing the notes in the first place...?

Kudos for solving this problem for Reddit!


Just a heads up: on mobile (Android Q), clicking on 'Get started using Reddit' gives me a 'No apps can perform this action' error from the OS. I have the Reddit app installed, so most likely the link tries to open the app (instead of opening the link within the browser) and fails.


Thanks for the heads up, will debug and push a fix tomorrow.


> I'm calling it Unearth...

Why not call it Digg?


hehe I see what you did there ;)


Awesome, this but for Twitter likes + retweets (I don't use Reddit enough)


This is a great idea. I need this for HN also.


I'm building an AI agent to help develop foreign language skills through realtime (spoken) conversations.

It's funny how we're all working from different definitions of the word "problem" - I'm certainly not changing the world with medical supplies for developing countries, renewable energy, payment systems and so on.

But it's something I'm really passionate about, and I'd be over the moon if I came anywhere close to the picture I have in my mind.

Back when I was studying German and Chinese, I would spend hours and hours on rote practice with little to show for it. My brain almost felt like it was on autopilot - the eyes would read the words and the hands would write the sentences, but the neurons weren't really firing. It didn't feel like I was properly building the synaptic bridges necessary to actually use those words in conversation.

On the flipside, after just 20 minutes speaking with a tutor, my proficiency would improve leaps and bounds. Being forced to map actual, real-world thoughts/concepts to the words/expressions I had learned - that's what made everything click. It felt like the difference between just reading a chapter in a maths textbook and actually doing the exercises.

So after keeping track of progress in NLP and speech recognition/synthesis in recent years, it seemed like a logical time to start. Progress is slow/incremental, but it is there.


I think it's a great idea. I first started learning Dutch with the Michel Thomas audio course, which is very much about being in a simulated small language class where you need to say sentences when prompted by the "teacher". Later on, I learned almost all the Dutch I needed to pass the citizenship language exam just by conversing with friends and family in Dutch, gradually building up fluency. Let me know if you need a beta tester, email is davedx@gmail.com


That would be fantastic, thanks. I'll jot your e-mail down and will reach out when I'm getting close to something testable.


I'm an English teacher. Sign me up too?


Sure! My email is in my profile, feel free to shoot me a note with your contact details.


1.) A solver for the unstructured Euler equations. ...I was intending to volunteer time for a local university project investigating parallels between holographic light with orbital angular momentum and hydrodynamics (in this case the Euler/Madelung equations). Not sure what happened as... volunteers get lost in the shuffle? Anyway, the solver is fun.

2.) Porting my Python code for nonlinear gradient-driven optimization of parametric surfaces to C++. Includes a constraint (propagation) solver based on Minikanren extended with interval arithmetic for continuous data (interval branch and contract; a toy contraction step is sketched after this list). This piece is a pre-processor, narrowing the design space to only feasible hyper-boxes before feeding design parameter sets (points in design space) to the (real-valued) solver. Also it does automatic differentiation of control points (i.e. B-spline control points), so I can write an energy functional for a smooth surface, with Lagrange multipliers for constraints (B-spline properties). Then I get the gradient and Hessian without extra programming. This makes for plug-and-play shape control. I am looking to extend this to subdivision surfaces and/or to work it towards mesh deformation with discrete differential geometry, so I've been baking with those things in separate mini-projects.

3.) Starting the Coursera discrete optimization course. This should help with, e.g. knapsack problems on Leetcode, some structural optimization things at work, and also it seems the job market for optimization is focused on discrete/integer/combinatorial stuff presently so this may help in ways I do not foresee.

4.) C++ expression template to CUDA for physical simulation: I am periodically whittling away at this.
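
Since (2) is dense, here's a toy version of the interval-contraction step it mentions, for the single constraint x + y = z (hull consistency on one constraint; an illustration only, not my actual solver, which composes many such contractions over the constraint network):

    # Intervals are (lo, hi) tuples. Narrow x, y, z so x + y == z stays satisfiable.
    def contract_sum(x, y, z):
        z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))  # z within x + y
        x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))  # x within z - y
        y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))  # y within z - x
        return x, y, z  # an empty interval (lo > hi) marks the box infeasible

    print(contract_sum((0, 10), (0, 10), (12, 12)))  # x and y both narrow to (2, 10)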


Would you be willing to explain what the applications of (2) are? I'll admit that I only understand a fraction of what you said in that section, but I'm curious what you're using it for.


Sure: the automated design-by-optimization of ship hull form geometry which meets constraints and is smooth according to some energy measures.

Build a functional to describe your ship problem, minimize it: if the solver is happy, you have a boat.... uh, or if you haven’t solved the entire problem, you have some geometry which can be stitched together with more optimization to make a boat.

More broadly, "why a boat?" Answer: because boats have a lot of constraints and a lot of shape (Gaussian curvature, non-rectangular topology, a need to be cheaply produced, etc.).

So it’s a good problem to tax your generative design or design space search/optimization capability.


Also, if there is a specific piece you’d like me to elaborate on, (I mean, beyond my sibling comment) I’m happy to do so!


I'm interested in your project #2. Since you mentioned B-splines: do you deal with trimmed surfaces? Would you have any reading recommendations for someone learning about surface optimisation?


Hey, thanks for your interest! I've avoided trimmed surfaces, in part because I'm interested in doing one or another kind of analysis on or with the parametric geometry, and trimmed surfaces are not so easy to work with for some of the finer control I want from my optimization tools. (They often cause compatibility issues with export between programs as well, but that becomes more important only if somebody uses your stuff ;)

I like other methods of getting local control, or finer shape control of surfaces. In my stuff I've used truncated hierarchical B-splines (THB-splines), which are great for adding detail but useless for changing topology. People speak highly of (analysis-suitable) T-splines, but I say they are complicated, and subdivision may be better overall now anyway. Generally speaking, I think the whole industry will have to go to subdivision. (Among friends I'd say it may carry right down to poly meshes via differential geometry, but those two representations might play well together given the right tools.)

Reading recommendations:

For everything you ever wanted to know about a B-spline, including a C++ library implementation from scratch, highly documented and explained: 1.) Piegl and Tiller, "The NURBS Book". This includes a tiny bit of shape control via optimization.

For an explanation of the basics of B-spline nonlinear optimization with Lagrange multipliers (focused on ships), there is a chapter that takes you to the state of the art circa 1995: 2.) Nowacki et al., "Computational Geometry for Ships".

3.) Tony DeRose's book "Wavelets for Computer Graphics" actually has some good scripts getting at the basics of wavelet B-splines and some facets of hierarchical parametric geometry.

The above is a start at form parameter design for B-splines. This was okay 20 years ago. It's still important as a basis for understanding optimization of parameterized shape. ---Even subdivision surfaces have control points.

Generally B-splines were found not to be flexible enough for representing local details efficiently. Further, the optimization techniques still require a lot of manual setup to get things right...

The next steps are still in development: subdivision surfaces are a way forward for shape representation. Generally they were more problematic for computing engineering quantities of interest, especially and precisely where they "go beyond" the B-spline to allow surfaces of greater flexibility -- that is where the analysis suitability breaks down to some extent. Again, this has been patched up in the last couple of decades, but change is still slow to come to the engineering industry.

I think it's well worthwhile to look at geometric optimization in computer graphics as well. See the Caltech Multi-Res group, Keenan Crane at CMU (Geometry Collective), and tons of SIGGRAPH papers where discrete differential geometry has been leveraged to do neat things with shape. (E.g. curvature flow: https://www.cs.cmu.edu/~kmcrane/Projects/ConformalWillmoreFl... I think there is newer work building off this and adding more complicated constraints, but I can't remember offhand. As is, they have some already!)

Back to the point: you wanted optimization readings. Well, it's mostly in the literature, and the literature is mostly kind of vague when it comes to parametric optimization of B-splines. Though the high points are mentioned, the detail is often hardly much better than you find in Nowacki, 1995. To this end, I have some really specific entry-level PDFs that might help, and the first part of my stuff is written up in this paper: https://www.sciencedirect.com/science/article/abs/pii/S01678... This deals mostly with curves, but has a direct extension to surfaces. Automatic differentiation really helps here! (I never published the bit on extending this to do surfaces directly (with all their attendant properties as potential constraints), as my professor said direct surface optimization was too expensive. Looking at the discrete differential papers as of late, I tend to disagree.)


I keep coming back to bother you :). One of the newer tricks to make parameter fitting less expensive which has recently been developed is active subspaces. I thought you might be interested in playing around with it.

Most of the research is being done out at the Colorado School of Mines by Paul Constantine. The basic idea is that you reduce your parameter space to the eigenvectors of the sensitivity matrix with the largest eigenvalues. Some of the work I have seen in constitutive modeling (and UQ) has effectively reduced parameter spaces of a couple hundred DOF to about 5-6.


Scanning through some literature, does this method require that the input space be equipped with a probability density function “quantifying the variability of the inputs”?

Seems like that would be the (or a function of the) thing we are after in sensitivity analysis.

On the other hand, it appears that I may be able to get away with some naive assumption about this quantity, compute eigenvectors and find the active subspace... and then vary the mode in these directions.

Is this for local or global optimization?

Part of my stuff was about finding a way to guarantee that a particular set of inputs results in a feasible design. (Edit: maybe active subspace could replace this... or exclude poor regions faster)

The other part (the gradient driven part) solves curves and surfaces for shape which conforms to constraints. We really need the fine details to match as the constraints are often of equality type.

From there, it seems this active subspace method could really help in searching the design space. (From what I read, this is the purpose) A more efficient method of response surface design. My stuff is agnostic about this.

Then again, surely it could be of use in more efficiently optimizing constrained curves and surfaces... I will keep thinking, but it seems a secondary use at best - or would you agree?


We should move this conversation to email (as I will check that more frequently and will be more likely to get back to it). See email in my profile.

Active subspaces come from the uncertainty quantification community. If you assume all your parameters are Gaussian, then the sensitivity matrix is directly correlated to the probability density functions. I find it easier to think in terms of the sensitivity matrix, but it's useful to realize that the sensitivity matrix approximates (complex) probability distributions.

My thought was that if you were optimizing over a huge parameter space theta = [theta_1, ..., theta_m], then you could reduce the parameter space by only looking at theta_reduced = [theta_i | d loss/d theta_i > threshold], or you could look at active subspaces and change the parameters to xi = [xi_1, ..., xi_k] where xi_i = SUM_j a_ij theta_j.

The xi_i could be given by the eigenvectors with the largest eigenvalues of the sensitivity matrix S_ij = d^2 loss / (d theta_i d theta_j).

Wouldn't it be nice if hacker news supported latex.

I haven't done any work here, but I suspect I will be doing some of this towards the end of summer.
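
Here's a minimal NumPy sketch of what I mean (using the Hessian-style sensitivity matrix from above; as I understand it, Constantine's formulation builds the matrix from averaged gradient outer products instead, but the eigendecomposition step is the same):

    import numpy as np

    def active_subspace(S, k):
        # S: m x m symmetric sensitivity matrix (d^2 loss / dtheta_i dtheta_j)
        vals, vecs = np.linalg.eigh(S)
        order = np.argsort(np.abs(vals))[::-1]   # largest-magnitude eigenvalues first
        return vecs[:, order[:k]]                # m x k basis of active directions

    m, k = 200, 5
    S = np.random.rand(m, m); S = S + S.T        # stand-in sensitivity matrix
    W = active_subspace(S, k)
    theta = np.random.rand(m)
    xi = W.T @ theta                             # optimize over R^5 instead of R^200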


Hey this is cool! I did not see your comment until now. Let me take a look (as soon as I can) and I will see what I can come back to you with.

Yeah, Colorado School of Mines! Small world, I am in the metro area. I've actually talked with a physics prof from there about helping with a project.


What library are you using for automatic differentiation? I am working on building code to optimize (and later build) high-quality finite element meshes for structural analysis. For the initial proof of concept, I am simply doing finite differences, but would prefer to eventually add AD. I am unsure which packages are suitable (currently all NumPy and SciPy).


Both in the Python version and so far in C++, I am using my own forward-mode implementation in NumPy and Eigen, respectively. (Why? Well, it was easy, I wanted to learn, it's been fast enough, and most critically, it allowed me to extend it by using interval-valued numbers underneath the AD variables.) Here's where I do something kind of funny in the AD implementation: basically, just write a class that overloads all the basic math ops with a structure containing the computations of the value, the gradient, and the Hessian. The trick, if there is any, is to have the basic AD variables store gradient vectors with a "1" in a unique spot for each separate variable (and a zero elsewhere). Hessians of these essential variables are zero matrices. Mathematical combinations of the AD variables automatically accrue the gradient and Hessian of... whatever the expression is. Lagrange multipliers are AD variables which extend the size of your gradient. Oh, and each "point" in, say, 3D is actually 3 variables, so your space (and size of gradient) is 3N + number of constraints in size. Write a Newton solver and you are off and running.
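
In case a sketch helps, here's a stripped-down version of that scheme with only add and multiply (value + gradient + Hessian through operator overloading; my real implementation also carries intervals underneath, handles the full set of ops, etc.):

    import numpy as np

    class AD:
        def __init__(self, val, grad, hess):
            self.val, self.grad, self.hess = val, grad, hess

        @staticmethod
        def var(val, index, n):
            g = np.zeros(n)
            g[index] = 1.0                        # the "1" in a unique spot
            return AD(val, g, np.zeros((n, n)))   # essential vars: zero Hessian

        def _lift(self, other):
            if isinstance(other, AD):
                return other
            return AD(other, np.zeros_like(self.grad), np.zeros_like(self.hess))

        def __add__(self, other):
            o = self._lift(other)
            return AD(self.val + o.val, self.grad + o.grad, self.hess + o.hess)
        __radd__ = __add__

        def __mul__(self, other):
            o = self._lift(other)
            g = self.grad * o.val + o.grad * self.val        # product rule
            h = (self.hess * o.val + o.hess * self.val
                 + np.outer(self.grad, o.grad) + np.outer(o.grad, self.grad))
            return AD(self.val * o.val, g, h)
        __rmul__ = __mul__

    # f(x, y) = x^2 + 3xy + y^2 at (1, 2): gradient and Hessian accrue for free.
    n = 2
    x, y = AD.var(1.0, 0, n), AD.var(2.0, 1, n)
    f = x * x + 3.0 * x * y + y * y
    print(f.val, f.grad)                          # 11.0 [8. 7.]
    step = np.linalg.solve(f.hess, -f.grad)       # one Newton step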

This would be pretty hefty (expensive) for a mesh. I've used it successfully for splines, where a smaller set of control points controls a surface. Direct mesh optimization sounds expensive to me. I assume you looked at constrained mesh smoothers? (E.g. old stuff like transfinite interpolation, Laplacian smoothing, etc.?) Maybe newer stuff in discrete differential geometry can extend some of those old capabilities? What is the state of the art? I have a general impression the field "went another way" but am not sure what that way is.

As for the auto diff, I’ve also got a version that does reverse mode via expression trees, but the fwd mode has been fast enough so far and is very simple. Nice thing here is that overloading can be used to construct the expression tree.

Of course if you do only gradient optimization you may not need the hessian. It’s there for Newton’s method.


Thanks! I am pretty sure nobody does direct optimization on mesh quality because it is hefty. I did come across a PhD thesis doing it for fluid-structure interactions, and the author's conclusion was that it was inferior to other techniques. I have a few tricks which will hopefully make the problem more tractable.

I use FEMAP at my day job and have found Laplacian smoothing and FEMAP's other built-in tools wanting.

I am currently thinking that my goal is to try and use reinforcement learning to build high quality meshes. In order to do that you need a loss function and if you are building a loss function you might as well wrap an optimizer around it.


Huh, machine learning for high quality meshing sounds like a great idea! (RL sounds like turning this idea up to 11 — exciting stuff and best of luck!)

FEMAP seems a hot topic these days. Some folks at my work are building an interface to it for meshing purposes.


Why don't you use Julia for #2?


For the re-write?

Simply for the experience. C++ is more in demand right now, as far as I can tell, sorry to say.


nothing to be ashamed of!


Creating the world's best IP address and domain name APIs and data sets, at https://ipinfo.io and https://host.io.

We've solved scaling and reliability (we handle 20 billion API requests a month), and we're now focusing almost all our efforts on our data quality, and new data products (like VPN detection).

We're bootstrapped, profitable, and we've got some big customers (like Apple and T-Mobile), and despite being around for almost 7 years we've still barely scratched the surface of the opportunity ahead of us.

If you think you could help we're hiring - shoot me a mail - ben@ipinfo.io


Why/how is this better than existing IP solutions (e.g. https://www.maxmind.com/en/home or https://www.digitalelement.com/)?


Here are some reasons why someone might choose to use us:

- We're super developer friendly - you don't even need an access token to make up to 1,000 requests per day. We have a clean/simple JSON response, and official SDKs for most popular languages (a quick example follows this list)

- We have a quick, reliable API. We obsess over latency and availability, and handle over 20 billion API requests a month. (Here's a technical overview of how we reduced API latency 50x for rDNS lookups: https://blog.ipinfo.io/reducing-ipinfo-io-api-latency-50x-by...)

- We obsess over data quality. We have a data team that's constantly striving to make our data and accuracy even better than it already is.

- We're innovating. We've launched and are working on exciting new data sets and products in the IP and domain data space (VPN detection, the host.io domain API, and more).

- We care about our customers. We have people working on customer support and customer success. If you run into an issue or need help, we'll be there to answer your questions.
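
For a feel of the developer experience, a lookup is a single GET (this sketch assumes the free, token-less tier; exact fields returned vary by plan):

    import requests

    info = requests.get("https://ipinfo.io/8.8.8.8/json").json()
    print(info.get("city"), info.get("org"))  # e.g. city and AS/organization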


Thanks! Do you have a work email I can contact you on? We currently look up >1 million IPs per second and are in the middle of evaluating IP-geo solutions.


Sure, would love to be part of your evaluation! ben@ipinfo.io


Is there a way to correct location data for an IP address? I have a static IP from my ISP and its location is almost never close to correct.


Yep, shoot me a mail :) Or see https://ipinfo.io/corrections


I'm a diplomat working on international norms for cyber and information warfare. I'm trying to get countries to agree on how to use and not use their capabilities, the influence on global democracy, the connection to armed conflict and the future of interstate relations. In practice, this means meeting a lot of people and spending a lot of time negotiating with other countries in scrappy conference rooms in the UN and elsewhere, sometimes in weird anonymous locations.

On the side, I'm an advisor to an impact investment foundation that is expanding their operations to East Africa. They're setting up an investment fund and accelerator programs to help companies tackle development challenges.

I'm also involved in a startup that is working to develop a new fintech app to create more data and access to credit for small-scale businesses in East Africa. It's a basic PWA app, not released yet, which has some real potential of scaling up and addressing some pretty substantial development challenges. (If anyone is really good with writing a bare-bones PWA based on Gatsby optimised for speed and low-bandwidth environments, please give me a shout).

I've had a weird career. Started out as a programmer in the late 90's, did my own startup in the mid 00's which was a half-baked success, moved to Africa for a few years and worked for the UN, moved back home and had kids, moved back to Africa and worked as a diplomat covering lots of conflicts in the Great Lakes region, moved back home again, worked for the impact foundation for a year and then rejoined diplomacy to do cyber work.


> I'm a diplomat working on international norms for cyber and information warfare.

I didn't know any such norms existed. What are some of the existing agreements, and if you can talk about it, what are some of the new ones you're trying to push forward?

Your career sounds crazy...in a good way! Was your initial involvement with the UN in a technical role?


There are several negotiations ongoing in various committees of the UN, where the issues that surface in the "real world" are negotiated: information warfare (such as election interference), responsibility for information across borders etc. https://www.cfr.org/blog/united-nations-doubles-its-workload...

Basically, it's about trying to defend international norms from an onslaught of attempts to make states the primary defender of the informational realm, and thereby legitimise oppression.

Yeah, my first job for the UN was coding a shitty CRUD system to keep track of HIV infections in East Africa.


I've spent a long career in tech (20+ years) and one day hope to fuse that with public service (both elected and foreign service such as yours). Would you mind if I were to get in touch to inquire more about your experiences?


sure thing, send me a dm!


I'm trying to build a programming language that might best be characterized as rust - ownership + GC + goroutines (coroutines with an automatic yield semantic).

My rationale for starting this project was that I like specific features or facilities of many individual languages, but I dislike those languages for a host of other reasons. Furthermore, I dislike those languages enough that I don't want to use them to build the projects I want to build.

I'm still at a relatively early point in the project, but it has been challenging so far. I'm implementing the compiler in Crystal, and I needed a PEG parser combinator library or a parser generator that targeted Crystal, but there wasn't a PEG parser in Crystal that supported left recursive grammar rules in a satisfactory way, so that was sub-project number 1. It took two years, I'm ashamed to say, but now I have a functioning PEG parser (with seemingly good support for left recursive grammar rules) in Crystal that I can use to implement the grammar for my language.

There is still a ton more to be done - see http://www.semanticdesigns.com/Products/DMS/LifeAfterParsing... for a list of remaining ToDos - but I'm optimistic I can make it work.


Maybe check out https://vlang.io. It might be similar to what you are doing, and personally I admire the ideas and decisions the author has made so far.


I saw vlang.io a few months ago. Every time I come back to the site, my jaw hits the ground again. I am utterly impressed by Alexander's productivity - it blows me away every time I consider it.

I think V is an impressive language, but it isn't quite geared toward my vision of what a language ought to be.

I am more a Rubyist than a C, Rust, or Go developer, and so my preference is for a higher level language that's a little more pleasant to use and doesn't make me think about some details that I consider "irrelevant". I'm firmly in the "sufficiently smart compiler" camp, and think that I shouldn't have to think about those low level details that only matter for the sake of performance - the compiler ought to handle that for me.


Neat! I spend a lot of time working with and on parsers and parser generators.

Did you use Sérgio Medeiros' algorithm for left recursion, perchance?


No. I was pretty naive in my initial attempts. I tried for many months to make Laurence Tratt's Algorithm 2 (see https://tratt.net/laurie/research/pubs/html/tratt__direct_le... ) work, but ultimately I failed. I recall running into some problem with my implementation of Tratt's idea that led me to conclude that his Algorithm 2 doesn't work as stated. My reasoning is buried in a git commit message from many months ago - I'd have to go look it up.

My takeaway from Tratt's explanation was that the general technique of growing the seed bottom-up style when in left-recursion - I think I've also seen that idea termed "recursive ascent" somewhere else but I can't place it offhand - seemed reasonable, so that's what I kept working on until I figured out something that seemed to work.
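
For anyone following along, the seed-growing idea itself fits in a few lines. A toy Python version for a single left-recursive rule (the concept only; a real implementation has to handle arbitrary rules, indirect recursion, and many trickier cases):

    # Toy seed-growing packrat parse of:  expr <- expr '-' num / num
    # Results are (value, next_pos) tuples, or None on failure.
    memo = {}  # cleared between parses in real code

    def num(s, pos):
        if pos < len(s) and s[pos].isdigit():
            return (int(s[pos]), pos + 1)
        return None

    def alternatives(s, pos):
        left = expr(s, pos)                 # left-recursive alternative first
        if left and left[1] < len(s) and s[left[1]] == '-':
            right = num(s, left[1] + 1)
            if right:
                return (left[0] - right[0], right[1])
        return num(s, pos)                  # base alternative

    def expr(s, pos):
        key = ('expr', pos)
        if key in memo:
            return memo[key]
        memo[key] = None                    # seed: the recursive call fails at first
        result = None
        while True:
            attempt = alternatives(s, pos)  # re-parse against the current seed
            if attempt is None or (result and attempt[1] <= result[1]):
                break                       # stop once we no longer consume more input
            result = attempt
            memo[key] = result              # grow the seed and go around again
        return result

    print(expr("1-2-3", 0))                 # (-4, 5): left-associative, (1-2)-3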

Later on, I ran across https://github.com/PhilippeSigaud/Pegged/wiki/Left-Recursion, which describes Sergio Medeiros' technique at a high level. One of the nice things I used from the Pegged project was the unit test suite. I re-implemented some of the unit tests from Pegged in my own PEG implementation and discovered that it failed those tests.

It took me another number of months to figure out why my implementation failed the unit tests. I re-jiggered my implementation to make it handle the scenarios captured by those unit tests, and then naively thought "hey, it works!"...

All was well until I ran across another set of unit tests in the Autumn PEG parser (see https://github.com/norswap/autumn). My implementation failed some of those as well. After another number of months, I had a fix for those too.

Long long long story short, this process continued until I couldn't find any more unit tests that my implementation would fail, so once again I'm at the point where I think "well, I think it works".

There have been a number of occasions where I've thought "if this doesn't work, I'm just going to re-implement Pegged in Crystal!". Perhaps that's what I should've done. Ha ha! In a few months, when I find another test case that breaks my implementation, I may just do that. We'll see. I hope it doesn't come to that. Fingers crossed. :-)


Sounds like a dialect of ML!


I've been meaning to improve "news" for a number of years now, with limited success so far. The current news industry is broken beyond repair: all you get are bite-sized irrelevant factoids. A good news service would be:

- Relevant to you and your interests...

- ... but diverse enough to feed your intellectual curiosity

- Delivered in a timely fashion: apart from once-a-year big events, most things can wait a few days; no need to require you to read the news every day

- Include some analysis to allow you to see the big picture

When I started a few years ago, I thought naively that a little machine learning should do the trick. But the problem is actually quite complex. In any case, the sector is ripe for disruption.


Great problem. If I had time I would build a news aggregator with an unintuitive voting concept: downvoting only, sorted by -age*downvotes.

The goal is to have a system that avoids the rich-get-richer effect, avoids false negatives (good content with a bad rating), and in general has a better correlation between votes, quality, and clicks than upvoting systems.

I wrote a small simulation to test my hypotheses against the HN and Reddit scoring mechanisms, and it looks very promising.

Unfortunately I don't have more time to work on it...
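
In case anyone wants to play with it, the core of the rule is tiny (sketch below; the recency tiebreak for undownvoted items is one extra choice you'd have to make, since -age*downvotes ties them all at zero):

    import time

    def score(item, now):
        # -age * downvotes: fresh or undownvoted items float, old+downvoted sink
        age_hours = (now - item["posted_at"]) / 3600.0
        return -age_hours * item["downvotes"]

    def front_page(items):
        now = time.time()
        return sorted(items,
                      key=lambda it: (score(it, now), it["posted_at"]),
                      reverse=True)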


Hey, just to give you some inspiration: there is a company in the Netherlands that tries to do what you've set out to do. It is called Blendle https://blendle.com/

The website is all in Dutch, but you can probably get the gist of it (I live in the Netherlands and don't speak Dutch, but their mission is quite clear).


I'm also on a mission to make news better but with a different approach.

We're making sure journalists get the best tooling to do their work. By empowering them, we help them spend time on what's actually important: writing quality content.

Would love to exchange some ideas with you.


Oooh, this is a great one. You're absolutely right "the sector is ripe for disruption"

Your list of points is great, if you can figure out a way to deliver a service like that it would be incredible.

I think one of the biggest challenges for current publications is the tie to the advertising model - an advertising business model forces products to decrease in quality over time. The same thing is happening to Google and Facebook, but it's super apparent on news sites. They're fucking awful these days; I can't read a single article without ten huge popups and a paywall.


I'm not sure whether to reply to the parent comment or this one, but since you mentioned the advertising model, I'd like to reply here.

All the points mentioned in the parent comment have been done before: magazines and newspapers. (Some) people used to subscribe to multiple publications to get their intake of information. Wide-ranging, impact-based news was the daily publications' specialty. The newest material in your specific interest was the magazines' playing field. News special reports used to be longform and discussed all the finer points, including analysis and graphs to see the big picture. Themed magazines served intellectual curiosity.

Somehow, in the age of niche creators, these companies are dying out. I think the saying 'the sector is ripe for disruption' is true, but not by way of software or automation. A better business model is what's really needed. The business model has been done before; the evolution to bite-sized factoids is the consequence of changing to a more heavily advertising-based business model.

The limiting factors of paper space and physical distribution seemed to strike a balance: news to be printed and distributed needed to be worth paying for. Maybe bundling also made it work[0]. The specific 'small' niches in newspapers/magazines could be fulfilled by sharing the cost across the mass of subscribers.

There is a tradeoff in the wide influence of gatekeepers, but even in that time independent publications managed to survive.

I think finding this balance again is really the key. Should we go back to tax-funded publications? Or will people welcome a microtransaction for articles? Or should the publications deliver curated, less frequent summaries to make customers happy? I think the disconnect between the customers and paying for content is driving down the quality and demand (in revenue) of these publications.

Recent years have shown that subscribing to the publications themselves is not optimal. Putting up a paywall angers people, but The Guardian has never wiped its donation banner off its pages. The need to find the correct business model for publications is urgent for the masses too; democracy that actually follows the people's will depends on it.

I don't follow the current landscape, but what The Athletic is currently doing is pretty interesting for sports.

Personally, I really like 'The Espresso' concept from The Economist. They curate five stories each day and deliver them in the morning directly in the app. No space to switch tabs and disengage, but space to dig deeper into the story through the links.

[0]: https://cdixon.org/2012/07/08/how-bundling-benefits-sellers-...


It's called The Economist, and it's really, really good.


Dental treatments, besides being very expensive, are often (up to 28% of the time) unnecessary. This happens because no one keeps dentists in check. I am trying to make dental treatment and diagnosis reviews easy, cheap, reliable, and fast.


May you succeed with this endeavor.

An ex-dentist attempted to strong-arm me into receiving an occlusal adjustment because my TMJ popped during a single visit. I knew this permanent procedure is rarely the best solution for that scenario. The dentist subsequently became irate and told me, "You'll lose all your teeth and look like an AIDS patient!" You can probably guess what era he's from.

I wanted to file a complaint, but it would've been my word against his, his assistant, and his hygienist. Absolutely ridiculous situation. It also provided a snapshot into how medical professionals exploit patient ignorance for revenue.


Our philosophy has two key rules: 1. The diagnostician shouldn't treat. 2. No one should review themselves.

This eliminates a lot of fraud and mistakes.


Wow, interesting idea! But how do you prevent dentists from forming referral cabals and cheating the system?


Easy: we ensure the same dentists don't work on the same case. The patient can choose who will do the clinical examination, but can't choose who does the diagnosis (they can only set the minimum ranking position of the diagnostician). Similarly, the patient can choose the dentist who will do the treatment, but this dentist can't change the recommended therapy.


I've been wanting to disrupt orthodontics for quite a long time. With the state of 3D scanners, 3D printers, and 3D modeling software, why hasn't the market price of orthodontic treatment dropped to cost-of-materials yet? As soon as at least one satisfactory/integrated open-source stack exists, I think it's only a matter of time before it does...


I don't know if this is what you mean, but in Japan many dentists have machines to make tooth crowns. Not sure how common that is in the West. I went to a dentist in SF; they did something and then I had to come back in 2 weeks after they made the crown. I've been to several dentists in Japan where they could make the crown while you wait 20-30 mins.


They do exist in the West as well. It is faster/cheaper to print crowns etc., but I don't believe the results are generally on par with an expert dental technician yet.


A dentist's expertise is hard to displace. Not everyone can be Bob Mortimer...


The cost of materials is only a small part of the price. The high price has more to do with the imbalance between supply and demand. Supply growth is limited by the number of orthodontists, due to the fact that every orthodontic treatment requires a human expert to oversee and manage it. SmileDirectClub is trying to disprove this last assumption, and we have yet to see if they manage it.



Stack not especially necessary. From 2016: http://amosdudley.com/weblog/Ortho


Hi there, thank you for sharing this very interesting idea; I would love to see it come to fruition.

I'm actually a dental student myself, and it saddens me that a significant chunk of dentists take advantage of the self-policing inherent to the field. It generates generalized distrust and resentment among the rest of dentists, in addition to being simply unfair.

As far as I know, there are no diagnosis codes in dentistry, just treatment codes. If there were, I imagine it would be possible to prevent this problem by randomly and routinely validating patients' charts.

On a side note, it is a budding dream of mine to build a startup related to dentistry, particularly in the realm of dental informatics, but not limited to it. I was wondering if you would be willing to chat with me about your experience sometime. It sounds fascinating.

Thanks again.


Let's chat. You can reach me at tomislav.mamic at protonmail.com


I suppose it won't save me, but <3 for working on this. I need work done, but I don't have dental insurance, will have to pay out of pocket, and have yet to overcome the analysis-paralysis problem of finding someone who'll charge a fair price, do good work, and won't add any more holes than I need.


One of the reasons I keep working on this project is that I am in a similar situation. When I started researching this topic, I did a test. I had x-rays made, and a dentist friend took dental photographs of me. Then I sent these over email to 7 independent dentists. The recommendations I got were as diverse as the ones from the "How dentists rip us off" article by Reader's Digest. I haven't acted on any of the recommendations except for 2 fillings that even I was able to recognise in the images. For the rest, I am going to use my app to find the best solution.


What's your timeline like?


I am not sure I understand your question. Could you be more specific?


Where are you on the march from nothing to a usable app (even if that's a beta) in months, seasons, years, or any other form that makes sense? :)


If you have a lot of work that needs doing, it may well work out cheaper to find a dentist overseas, fly there, stay for a period of time, and then fly home again.

For some reason I keep hearing about people flying to Serbia to do this.


Ha, analysis-paralysis is a good term. (Also need work done)

What work, generally?


Are you in the United States? If so, how do you get past the regulatory hurdles that is each state’s dental board comprised entirely of dentists?

EDIT: Sorry, I missed the reviews part. Do you mean easily getting a second opinion based on diagnostic imaging?


I call it "verified diagnosis". We use game theory to extract the truth. Think prisoner's dilemma for dentists.

Edit: Not in the US, but planning to launch there. You can't practice dentistry in the US if you haven't got a US diploma. However, diagnostic dental work (at least in some states) is an exception to this.


Too bad there is no safe or easy way to get X-rays just by sending a home kit to customers.


I don't think that's necessary. Most people living in urban areas have an x-ray practice in close vicinity. Even for those who don't, a home kit wouldn't justify the cost. You want to get a standard, high-quality x-ray set made twice a year, professionally done. Most people on the planet can make a trip to a city twice a year.


What kinds of treatments make up that 28% of "unnecessary"? Regular cleanings done too frequently?


It depends; here they say mostly fillings. But there are only so many implants you can sell to a person, and acceptance is lower.

Research that showed the 28% figure: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3036573


This is anecdotal, but I remember reading an article about a dentist who was convicted of doing expensive and completely unnecessary surgeries on many of his patients, to the tune of hundreds of thousands of dollars per patient in some cases. I can't find the article, unfortunately.


I believe The Atlantic had such an article recently. A good starting read is "How dentists rip us off" from Reader's Digest.

However, it's a bell curve. There are extremely moral and extremely immoral people. Some of them are dentists.


> There are extremely moral and extremely immoral people. Some of them are dentists.

Absolutely true. However, it seems that other areas of medicine have better systems in place to prevent abuse, and dentistry would do well to follow suit.


Yes, dentistry is in many ways different from the rest of medicine. It's kind of separated from it. However, that is not the source of the problem. It's the fact that the average person uses dental care more often than any other medical care, and dental treatments are in the vast majority procedures rather than drugs.

Let's focus on the second part of that statement. It means that the majority of the cost of dental care goes to the practitioner rather than to drug makers. This means they have more reason to cheat. The payoff is higher.


That's enlightening, thank you. I was unaware that the type of medical care (within a specialty) can change the financial incentives of the doctor. As someone who was just told to get my wisdom teeth removed, this makes me want to seek another opinion.


Hey, this is cool. One of my good friends worked (is working?) on this problem for over a year. He's a long-time dental practitioner/owner and really keen on this topic. Maybe you two should talk? If interested, email me at bw2016 @ protonmail.com .


Thanks, I'll reach out.


The joys of a fee-for-service model! What approach are you taking? As a payer, is capitation better? Outcomes-based payments?


We are not aiming to change the way you purchase dental services. Rather, we focus on ensuring you don't buy unnecessary dental services.

Let's say you are Delta Dental: that 28% is basically insurance fraud. If you could get rid of it, you would save billions. You could offer lower premiums and full coverage without any copays.


I seriously hope you're able to succeed.


Thank you!


Persisting your OS state as a "context" - saving and loading your open applications, their windows, tabs, open files/documents and so on.

Started because of frequent multitasking-heavy work with limited resources.

Open Beta (macOS) as soon as I finish license verification and delta updates.

https://cleave.app


I didn't know I wanted this until now, and now I really want it. I often open a ton of related applications, and then avoid restarting my computer because it's inconvenient to reopen everything.

I'm on Linux, so I won't be able to use your app, but great idea and good luck!


I think taking this sort of context snapshot may be very difficult if you assume no direct application integration. It would almost be like you'd need a mechanism operating in a partition of RAM that could not interact with the currently running context, streaming all of the RAM in use to disk.

It'd also be a data integrity nightmare if one context shared apps with another. How would you manage memory corruption, allocation, and saving in this sort of scenario?

Anyway, sounds awesome.

Good luck


You could explore running everything in a Linux container with LXD. Then freeze the container if you wish to shut down, and unfreeze when you're ready to restart; it's how container workloads can be moved from one system to another.


Interesting, I'll look into LXD. Thanks for the pointer!


Nice. I’ve thought for a few years now that this is the next big thing I want out of an OS and software ecosystem—suspend work session, resume book research session and personal communication session, suspend research session leaving comm session active, open Christmas shopping session, suspend, suspend all open sessions and load gaming session, and so on. Huge bonus if the sessions can be moved from one device to another.


I haven't really thought this out and I don't have a Mac to test on, but why not just use separate user accounts? Doesn't OS X already reopen everything?


Separate user accounts is kind of the naive (and not quite complete) solution to the same problem.

Not really a smooth experience in my opinion; it doesn't map quite as well to the concept of "working context" as I think of it. Also, you'd have to maintain your list of users and manually sync any settings, etc. - whereas with Cleave, I'm planning on implementing white- or blacklisting of applications on a per-context basis (and system settings etc. are implicitly shared).


I love the idea of what you're building - signed up to be notified for the beta!

> Separate user accounts is kind of the naive (and not quite complete) solution to the same problem.

I too have attempted to solve this problem with user accounts; and yeah it doesn't work well. Files are a pain to share, the log-out-log-back-in process takes forever, and a bunch of preferences don't sync across user accounts.

I particularly like the idea of having a super-low-energy mode where it's just for writing or reading, and saved states for my countless research sessions. Also, being able to freeze my dev workspace and resume it any point sounds amazing.

Excited to try it out!


This is cool! I wish I had something like this across multiple machines, although then there are a lot of sync problems that need to be solved.


This looks great! I've often wished that something like this existed. How long have you been working on it?


Thanks!

The basic idea, on and off for close to five years. Started out experimenting with shell session persistence (solved[0], but not quite), then prototyping a browser concept and playing with browser extensions, then settling on the OS level...

[0]: https://github.com/EivindArvesen/prm


Creating a traffic simulation (https://github.com/dabreegster/abstreet/#ab-street) that's both realistic enough to generate results meaningful in the real world and easy enough to use that anybody living in a city could use it to experiment with some change to cycling or transit infrastructure. Some of the problems hiding in there:

- Getting a representation of a city that cleanly divides paved areas into distinct roads and intersections, and understands the weird multi-part intersections that Seattle has plenty of. This (https://docs.google.com/presentation/d/1cF7qFtjAzkXL_r62CjxB...) and this (https://github.com/dabreegster/abstreet/blob/master/docs/art...) have some details about how I'm attempting this so far.

- Inferring reasonable data when it doesn't exist. How are traffic signals currently timed? No public dataset exists, so I have heuristics that generate a plan. If I can make the UI (https://raw.githubusercontent.com/dabreegster/abstreet/maste...) for editing signals fluid enough, would it be worth trying to collect this data through crowd-sourcing?

- Figuring out what to even measure to demonstrate some change "helps a bus run faster." Should people try to minimize the 99%ile time for all buses of a certain route to make one complete loop over the whole day? Or to reduce the worst-case trip time of any agent using that bus by some amount? Or to minimize the average delay only during peak times between any pair of adjacent stops?

- Less technical: How to evangelize this project, get the city of Seattle and advocacy groups here using it, and find contributors?


I think a good way to define a bus running faster is perhaps to measure the average change in ETA between any two stops across all stops per day - in other words, the mean pairwise difference between the original and changed routes. Then maybe stratify that for peak, off-peak, evening, night, morning - however granular you'd want to go.

You can then think about optimizing routing for commuters, or late-night service around a random event with increased traffic, etc.
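
Concretely, something like this for one route loop, stratified per time bucket (a sketch of the statistic as I mean it, with cumulative arrival times per stop):

    from itertools import combinations

    def mean_pairwise_eta_change(eta_before, eta_after):
        # eta_*: cumulative arrival time (minutes) at each stop along one loop
        deltas = []
        for i, j in combinations(range(len(eta_before)), 2):
            before = eta_before[j] - eta_before[i]   # travel time, stop i -> j
            after = eta_after[j] - eta_after[i]
            deltas.append(after - before)            # negative = the change helped
        return sum(deltas) / len(deltas)

    print(mean_pairwise_eta_change([0, 5, 12, 20], [0, 4, 10, 17]))  # about -1.67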


I like this as a single aggregate measurement to base scores on. Some of the difficulty comes from displaying this score as the simulation runs through the day. At first the average might look great (a change helps the bus in the early AM), but then rush hour hits, and the change actually makes things worse. Or maybe a re-timed traffic signal helps buses going northbound in one spot, but hurts the southbound buses, so also seeing the change between each pair of stops is important. https://imgur.com/Fk1GfKG is what I've got for this so far, but I think a different way to visualize this is necessary. :)


Whoa, that is super interesting. I think if you can somehow make a simplified frontend for the web, it will greatly improve visibility. I'm not a frontend developer, but it might be possible to have a simpler version completely rendered in WebGL.


This is totally on my radar. I want to compile to wasm and target WebGL. I'm using https://github.com/glium/glium/ to target OpenGL on Windows, Mac, and Linux right now. I've been meaning to check out https://github.com/grovesNL/glow/, which also supports WebGL.


Cool, do you have real-world data that supports your simulations?


I'm pulling in trips (when does an agent leave, where do they depart from and go to, what mode of transport do they use) from https://www.psrc.org/activity-based-travel-model-soundcast, which is a very fancy model that uses land use, census, vehicle counts, etc. This is the most realistic data that's public, from what I've found.

Map data comes from OpenStreetMap. Aside from the problems I mentioned with inferring good geometry for some intersections, it's pretty solid.

Drivers in A/B Street have to find parking and walk between buildings and parked cars. So the amount of on- and off-street parking in an area is pretty important; there might be lots of congestion in the sim if I don't know about parking. I'm inferring on-street parking from http://data-seattlecitygis.opendata.arcgis.com/datasets/bloc..., which is extremely inaccurate, and I have no data source yet for private parking attached to individual buildings.

Until I have some of this data, calibrating the sim against reality is pretty tough.


Somewhat of a departure from the rest of the current comments: I'm a dataminer/ROM hacker mainly working on Nintendo handheld systems from the DS onward, and I've recently resigned myself to learning the sort of ASM needed to reverse-engineer compiled assets (since obviously I have no way of getting a pre-compiled version of those assets). I've been looking into the legality of reverse engineering, and I've always found it to be a fascinating subject.

All because I can't get a 3DS game to load modified videos.


That sounds awesome! Are these things you're doing for work, or on your own (or both)? I'm not much of a gamer, and I don't know what the market looks like for Nintendo systems...what are the applications for custom ROMs on DSs and other handheld systems?

Some excellent free reverse engineering courses for Windows[0] and Linux[1] were posted in another HN thread recently, and a couple other HNers and I started a study group for the Linux course. We have a Discord group, and we're meeting for the first time on Thursday, in case you're interested. (It sounds from what you're doing as if you might already be past those resources, knowledge-wise.)

[0] https://github.com/0xZ0F/Z0FCourse_ReverseEngineering [1] https://beginners.re/


> what are the applications for custom ROMs on DSs and other handheld systems?

Having played around a bit with a jailbroken Switch, the primary reason is to be able to either run mods for games (e.g. there's a surprisingly viable set of tools for converting/bundling Skyrim mods for PC for use on the Switch version) or to run homebrew apps, the common ones being emulators. Previously, I had played around with GBA emulators on PSP/PS Vita, but the Switch form factor and screen size are a lot nicer to play on.

That being said, you could also go the route of installing a completely foreign OS instead of just modified Switch firmware. Android seems to run flawlessly on a Switch from my testing; even though Android doesn't seem to be able to access the Joy-Cons directly when they're attached, you can still pair them over Bluetooth, and they work fine even when they're slotted in. Android even seems to have pretty good built-in support for that kind of input device; you can easily navigate through the icons on the home screen, hit "A" to select whatever's highlighted, etc.


Fascinating. I'm going to have to do some reading about this -- it's a whole world I didn't know existed. Thanks for explaining.


> All because I can't get a 3DS game to load modified videos

You were trying to load VR porn on there, huh?


Flight search. I hunt deals for Aussie travellers (https://www.beatthatflight.com.au), but an inordinate amount of my searching for deals is manual.

- initially, all manual

- secondly, timers - I know when some airlines do deals, so I go look

- thirdly, I found other sites indexing unusually cheap flights, but they're not always the same price on my site

- fourth, built a script to search my own site for a route, but the number of combinations rockets with the increase in date ranges (a back-of-envelope count follows this list). If you're taking different stopovers etc., it becomes ludicrous.

- it's growing at least, but finding ways to make it less hands-on and less mind-numbing is a never-ending quest. Although I still enjoy it :)
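
To give a feel for the blowup in that fourth point, even a modest return-trip search multiplies out fast (all numbers below invented for illustration):

    from itertools import product

    out_dates = range(30)            # a 30-day departure window
    stay_lengths = range(5, 15)      # trips of 5-14 nights
    stopovers = ["SIN", "KUL", "HKG", "BKK", "DXB", None]  # hypothetical hubs

    queries = len(list(product(out_dates, stay_lengths, stopovers)))
    print(queries)  # 30 * 10 * 6 = 1800 searches for ONE origin/destination pair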


That really is a hard problem. Here is a very good slide deck [0] detailing the complexity of flight search. It's written by an employee of ITA Software, who built QPX, the system that was bought by Google and now powers Google Flights. [1]

[0] http://www.ai.mit.edu/courses/6.034f/psets/ps1/airtravel.pdf

[1] https://en.wikipedia.org/wiki/ITA_Software


Haha, I very nearly quoted that deck in my summary, I've read it a few times. When people ask "why can't you just show the cheapest flights" I usually pull out their Chicago example. The computational complexity is just enormous.

I've found, however, that for one-way routes I can do fairly well; for return journeys I currently have to just sample the space and hope that I'm hitting some good dates in there.


I've always wondered how flight search sites develop a system for such a difficult problem with minimal API options and Google limiting access to QPX. There seems to be a lot of data fragmentation in the travel industry too. How do you automate under those constraints? Are there good alternatives to QPX?


Not easily. Sometimes just doing manual-esque queries using Selenium(!).

There are rate limits on a lot of tools/APIs as well, so if I can narrow my search boundaries quickly with some hand tweaking, it helps a lot.


Thought your username looked familiar, I've seen your site on OzBargain many times.

I always wondered how/if finding cheap flight deals could be automated, and it's interesting to hear that it's as hard as it seems.

Just FYI, your logo and part of the menu disappear off the sides of the site background on my 4K screen.


Interesting - thanks! Would you let me know what browser/OS as well, just so I can have a look?


For sure. Win 10 Pro x64, 4K screen at 150%. Just tested on Firefox Developer 73.0b6 (64-bit), Chrome 79.0.3945.130 (64-bit), Edge 44.18362.449.0, IE 11.592.18362, all with the same issue [0][1]

I think it's just that with the background image being 1880×1253, it isn't wide enough with background-size: auto. I just changed it in FF devtools to background-size: contain, and it looks fine.

[0] https://www.dropbox.com/s/f0ewee5b1rc1bnw/beatthatflightLHS....

[1] https://www.dropbox.com/s/pxdvr2o8o96f6x7/beatthatflightRHS....


I see this too - see https://imgur.com/a/029OQCn if the parent's screenshots are not enough.


cheers to you too!


champion, thanks!


As some of the other responses mention, this is an insanely complex problem...as evidenced by the large number of businesses in this space. In your eyes, what are the best flight search engines at the moment? Aside from https://www.beatthatflight.com.au of course ;)


azair.eu for Europe and Asia


Kayak?


I'm a big Kayak fan for their flexible search, and am sad Hipmunk is shutting down this month :/ Grabaseat.co.nz is brilliant for NZ-specific deals.


The environment, and how to lead people to enjoy acting sustainably long-term so they spread that joy and sustainability to others - leading, not coercing.

I've found a strategy I believe will work -- my Leadership and the Environment podcast.

Here's the podcast: http://joshuaspodek.com/podcast

Here's an episode clarifying my strategy: https://shows.acast.com/leadership-and-the-environment/episo...

Here's my corporate strategy: https://shows.acast.com/leadership-and-the-environment/episo...


> "enjoy acting sustainably long-term, so they spread that joy and sustainability to others"

This sounds great, can't wait to check it out. After taking a Permaculture Design Course, I fell in love with the ideal of living a better, more "luxurious" life that has not only a lower impact on the planet, but a positive impact. I'm on my way down that path, but still figuring out how to make it replicable.


Interesting to me, your mileage may vary.

Working in my spare time on a command line terminal UI application that searches over source code and ranks the results.

It came about from watching a work colleague constantly opening VSCode when trying to find things in a codebase. I mentioned he should use ripgrep/Silver Searcher, which he tried, but he said he preferred to get more context and wanted ranked results. The context was possible using -A and -B, but he didn't want that.

I had always wanted to make a terminal application and it seemed like an interesting problem to solve. I had also always wanted to implement BM25/TFIDF ranking algorithms myself and I was curious to see how well this could be done without pre-flighting and building an index.
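
To give a sense of scale: the scoring core of index-free BM25 is tiny. A toy Python sketch (not the actual implementation -- and recomputing document statistics per query like this is exactly the performance trade-off involved):

    import math
    import re
    from collections import Counter

    def bm25_scores(query, docs, k1=1.2, b=0.75):
        # tokenize everything at query time -- no pre-built index
        tokenize = lambda s: re.findall(r"\w+", s.lower())
        doc_toks = [tokenize(d) for d in docs]
        q_terms = tokenize(query)
        n = len(docs)
        avgdl = sum(len(t) for t in doc_toks) / n
        # document frequency of each query term
        df = {q: sum(1 for t in doc_toks if q in t) for q in q_terms}
        scores = []
        for toks in doc_toks:
            tf = Counter(toks)
            score = 0.0
            for q in q_terms:
                idf = math.log((n - df[q] + 0.5) / (df[q] + 0.5) + 1)
                f = tf[q]
                score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(toks) / avgdl))
            scores.append(score)
        return scores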

Still a work in progress https://github.com/boyter/cs but coming along. It's usable now (with bugs) and is being used by my workmate.


Hey! Looks cool, I tried compiling it on Windows (at work ATM, hehe) but so far it does not seem to find any files. If I start it in a folder, the search always says 0 in 0 files.

Is windows a supported platform?


Yes, it is -- or it should be. I'd need more details to work out your case; I will have a look at potential issues in the next few hours.

EDIT: I tried it and hit no issues on Windows. I would need more details to investigate, so please raise an issue on GitHub and I will look into it.


Looks promising, and it could definitely have huge commercial value. I'm curious how you establish context for code search. Also, how do you decide how to rank results based on a single query string?


Looks useful. Would it be fast in huge codebases?


Yes and no. The interactive version feels fast because it displays results as it goes. For pure command-line use it will be fast, but not comparable to ripgrep/grep/silver searcher or the others.

I want it fast, but absolute performance is not a goal, as it's a trade-off with any ranking algorithm. It's not really the same sort of tool.

Anyway here is a search over my projects directory which is about 5.7 GB of content https://github.com/boyter/cs/blob/master/large-search.gif

Feel free to try it out if you are able to build from source. It certainly has some known UI bugs on large codebases, but it generally works.


This is super cool! Could see myself using this when I'm feeling foggy-brained and want to bash keys until something useful comes up. But I also just love how it looks; nice interactive TUI apps are always a pleasure to use. Good luck getting it stable!


Thank you. It is rather nice to use. The instant feedback is fantastic for code spelunking in my opinion.

The TUI is indeed the sticking point and the thing people like the most. I had planned on sticking an embedded HTML page in there as well, but that might be a bit much for a first version.


Characterizing the effect of near-surface humidity and wave action on Ka/Ku band satellite transmissions from a surfboard-sized autonomous swimming vessel. I have a little sensor platform, and customers that want it to do a whole lot more. Bandwidth can be hard to come by 6000 miles from the nearest human.

Also, working on how to integrate a small team of hackers into a big team of production oriented engineers. Making the first of something is such a different skill set to making thousands more.

I got here by getting headhunted for a neat-sounding job after a project elsewhere ended, and then assuming more and more duties until my title had to change to match my responsibilities.


I’m surprised there’s so much demand for that.

Also, I bet you're excited for Starlink.


The ocean's a big place, and really hard to operate in. I'm looking forward to seeing how little power the terminals end up consuming!


Semi-random question.

Is this related to the refractivity of the near-surface atmosphere above the water's surface affecting directionality and bandwidth?


Yes, and a couple of other things: the water acts as an RF ground plane, so different sea states experience this differently; the humidity in the first few inches off the surface is enough to affect signal performance; and mist and spray from splashing affect things even more.


I'm currently working on ways to disconnect better at the end of the day. So far, I've figured out how to create an "inverse Screen Time" on iOS so it locks me out of most of my phone except for a 3-hour window in the evening that ends one hour before bedtime. I've also begun using timers to keep track of how much time each of my daily tasks is eating up.

Also the usual stuff. Hitting the gym (30 min a day, 5x a week), clearing out junk I don't need anymore, multivitamin, etc. 2020 is going to be the year of wellness for me.

EDIT: Forgot to mention this, DELETE YOUR SOCIAL MEDIA APPS. All of them. Use the mobile websites if you need to read them. Not having the icons on my home screen or app drawer made all the difference and really helped fix my cyberaddiction.


Same. Focusing on wellness this year and it is great. Have read so much more after forcing less screen time


My challenge to you is to try to disconnect from HN as much as possible. I logged off for six months last January to detox from the neurotic comments that this place gets and it totally reset my mind. I plan on logging off for another long period of time after I check tomorrow on some Ask HNs I posted.

Come outside, it's wonderful.


Why do you need to be forcefully locked out? From what you wrote, you seem capable enough to... put the device down and avoid touching it.


Not trying to change the world here... I just want a good way to look at my bottle caps

site: http://collectibleapp.com/

project: https://github.com/marcelaguiar/Collectible


I dig it. Problems don't have to be world-changing to be interesting.


I like this; the bottle cap collection looks cool.


I am working on a natural language parser using symbolic AI (no machine learning...). It works a bit like a multi-pass parser for programming languages, but with the ability to handle multiple ambiguous meanings of a sentence at the same time. An English sentence is translated into an intermediate representation -- or rather, depending on the complexity, hundreds or thousands of intermediate representations for the same sentence. Then there are several passes to eliminate interpretations, until it finds the most probable one.

It's tightly integrated with a database of human knowledge to evaluate the different interpretations. The goal is that you can add data to and query from the database using English. I started working on it 3 years ago, and there is still a long way to go. I have done most of the infrastructure (including a high-level programming language for pattern recognition that can seamlessly handle asynchronous database accesses) and I am close to completing the first pass....
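
To show the generate-then-eliminate shape of this (nothing like the real system -- just a made-up lexicon and one crude filtering pass, sketched in Python):

    from itertools import product

    # toy lexicon: each word has one or more readings (the ambiguity source)
    LEXICON = {
        "time":  [("noun", "time"), ("verb", "time")],
        "flies": [("noun", "fly"), ("verb", "fly")],
        "like":  [("prep", "like"), ("verb", "like")],
        "an":    [("det", "an")],
        "arrow": [("noun", "arrow")],
    }

    def interpretations(sentence):
        # first pass: every combination of per-word readings
        return list(product(*(LEXICON[w] for w in sentence.lower().split())))

    def plausible(reading):
        # a later pass: drop readings that violate a crude constraint,
        # e.g. a simple sentence should contain exactly one verb
        return sum(1 for tag, _ in reading if tag == "verb") == 1

    candidates = interpretations("time flies like an arrow")
    print(len(candidates))                                # 8 raw readings
    print(len([r for r in candidates if plausible(r)]))   # 3 survive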


I’ve been working on merging statistical deep learning with a knowledge-graph-like representation of the data. I run a consumer site focused on privacy and build NLP secret sauce underneath to serve users. Ping me at hn username @doerhub.com if you want to chat.


Good idea! I've worked before on trying to come up with a good intermediate representation for these kinds of purposes... Linguistic theories are only partially helpful here. Want to have a chat? I really think there is a future in this approach.


Alleviating homelessness using technology and data.

I recently learned that homelessness is not just about the people you see on the street every day, but that homelessness is in fact a funnel that people fall deeper into as their situation becomes increasingly desperate. At the bottom of the funnel are the aforementioned group known as the "chronically homeless". The top of the funnel, however, looks a lot different: it consists of people who might be couch surfing with friends, sleeping in cars or moving between motels. This group is known as the "hidden homeless". We likely encounter this group every day, at work, in the coffee shop, at the gym, but they look just like you and me, so we fail to recognise their situation.

The "hidden homeless", at the top of the funnel, actually make up the vast majority of the homeless population. What's even more surprising is that this group overwhelmingly has access to technology, 90% have access to a smartphone or laptop with internet access.

The not-for-profit organisation I am involved with, called Ample Labs (www.amplelabs.co), is working on developing chatbots to more rapidly connect this group with essential services. This allows us to get a better understanding of their behaviours, what services they use, and how effective those services are. This has two benefits: first, by connecting the 'hidden homeless' with essential services quickly, we make it less likely that they will fall further down the funnel into chronic homelessness; secondly, it provides us with essential data that we share with cities to inform policy making.

The long-term hope is that by using data to prevent at-risk populations from falling deeper into homelessness, we can combat the problem at its source and start to eliminate homelessness before it even begins.


Thank you for working to fix a problem whose solution would materially improve people's lives. I have a lot of respect for that. Finding a way to have both a technological and social impact in a single project is one of my long-term goals.

Homelessness is a topic I know embarrassingly little about. What has the data led you to recommend to cities, policy-wise?

It seems like large-scale data collection should be (but isn't) used to inform many public policy decisions. Another area where I think data could have a huge impact is studying how the punishments that criminals are served in court affects their outcomes later in life, and requiring judges to factor that data into their verdicts.


This is great. As much hate as the topic of personal data gets here on HN, it has massive potential to solve a lot of societal problems. Imagine a world of open data that is utilized only for benefit. Unfortunately, game theory skews the meaning of 'benefit'.


Cool stuff -- looks like using tech for the right purposes. Is the code open source? I can imagine the data would be strictly confidential as it involves real people.

Any cities south of the border?


Where are you based? Here in Dublin (Ireland) there is a housing/homeless crisis that needs some resolution, which the feckless gov is unlikely to provide...


Honest question. What essential services are available to a mid 30s white guy in San Francisco? That was me 10 years ago.


I'm leading a startup nonprofit exploring policy solutions for America's 100 largest cities.

Of course, there are plenty of national and state-level policy organizations; some even dip their toes in the municipal policy scene. But in the cities, most groups are self-interested or focused on a single issue.

We're trying to fill the gap with original research and projects that operationalize the research of others -- taking, for example, good research and popularizing it, developing components for model ordinances, etc.


Do you have a website? Public policy is an area that seems almost ridiculously unoptimized, so I'm really interested in any organizations that are trying to fix that.

This is the first time I've heard of doing research with the express goal of creating actionable plans, but now that you've brought it up, it seems like that's exactly how policy-focused research should be.


It's not so much laziness or lack of vision as it is different skill sets. Additionally, the big university-based research players (Brookings, Hoover, Mercatus) are all prevented from anything that looks too much like advocacy, so they stay away from things like model ordinances out of an abundance of caution.


Better Cities Project -- better-cities.org


That's really cool, I'm researching policy solutions for LMICs right now but the U.S. is home. If you ever want to talk about taxation in particular, would love to chat.


What is it called?


Better Cities Project


A meal planning app for weight loss. I lost 50 pounds using calorie counting a few years ago. The thing that frustrated me most was trying to come up with meal plans every week. It's quite tedious to constantly find new recipes if you get bored with eating the same meals over and over. The weeks I planned ahead of time, I lost weight quickly. When I didn't plan I would stop losing weight, sometimes for months. So I'm trying to build an app that automatically builds a meal plan for you that you can then tweak.

There's a ton of problems when you're dealing with food though. Calculating calories of a recipe you find online can be tough. On one side, it's a natural language problem to extract the ingredient, the amount, the unit, and the prep/notes. On the other side, it's a data/data matching problem, where you need good data on a ton of ingredients, and then need to pick a reasonable one for "1 cup of milk".
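
For a flavor of the NLP side, here's a toy Python sketch of the extraction step. Real recipe lines are far messier than this regex can handle, which is exactly why it's a hard problem:

    import re

    # toy pattern: amount, unit, ingredient, optional prep after a comma
    PATTERN = re.compile(
        r"^(?P<amount>\d+(?:\s+\d+/\d+|/\d+|\.\d+)?)\s+"
        r"(?P<unit>cups?|tbsp|tsp|oz|grams?|g|ml)\s+"
        r"(?P<ingredient>[^,]+)"
        r"(?:,\s*(?P<prep>.+))?$",
        re.IGNORECASE,
    )

    def parse_ingredient(line):
        m = PATTERN.match(line.strip())
        return m.groupdict() if m else None

    print(parse_ingredient("1 1/2 cups whole milk, chilled"))
    # {'amount': '1 1/2', 'unit': 'cups', 'ingredient': 'whole milk', 'prep': 'chilled'}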

And of course everyone eats and prepares food so differently that suggesting meals they'll actually enjoy is hard without asking them a bunch of questions first.


Not to discourage you, but this seems like a tough sell. Most people gain weight because they don't want to plan out or make nutritious meals. The strategy I've always used is to figure out some small number of meals and then just pick one of them. It removes any mental load for shopping and cooking, and you only have to figure it out once. It would be useful to suggest very quick meals that are Y calories or less.


It's definitely not for everyone! But I know there are some people who just want to be told "go to the store, buy this stuff, you're going to be cooking these things these days if you want to lose weight". The same as a lot of people don't want to put in the work to do calorie counting through MyFitnessPal/others, but some people (including me!) swear by it.

Worst case, I developed an app I like using more than MyFitnessPal just for me and I'll still be quite happy.


I'm guessing you already know about them, but you might be able to get some useful ideas from Eat This Much[0], who I think is the biggest competitor in this space. It's a very complex challenge -- good luck!

[0]https://www.eatthismuch.com/


Oh yeah, they're pretty wonderful! Luckily, there's definitely enough room in this space for more than a few competitors. Thank you :)


I love this idea! I need a version that would also incorporate batch cooking so that I save time on cooking and make it easier to reach.


Oh yeah, that's pretty soon on my feature list! Something like "you're going to need to cook these nights, these nights are leftovers" and give you options to choose those and bias results towards eating the same thing every week day, etc to support things like Meal Prep Sunday.

So many of the other tools are just unrealistic for how I cook, which is cook like 4-5 times a week max and never for breakfast or lunch (unless it's ahead of time).


I'm working on a personal project that allows you to add notes to youtube videos, and be able to skip quickly to specific sections.

I started this because I'm learning guitar mostly from YouTube, and I find myself constantly seeking to specific sections of videos.

I'll probably launch the site on ShowHN soon. Feel free to DM me if you can think of other uses for this, or if you're interested to know when this launches.
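
The core data model is pleasantly simple, by the way -- a note is basically a (seconds, text) pair rendered as a deep link. A toy Python sketch (VIDEO_ID is a placeholder; the t= query parameter is standard YouTube behavior):

    # toy note store: seconds into the video -> note text
    notes = {15: "intro riff", 95: "chorus fingering", 212: "solo, slow this down"}

    def jump_link(video_id, seconds):
        # YouTube seeks to t= on load, e.g. ...watch?v=ID&t=95s
        return f"https://www.youtube.com/watch?v={video_id}&t={seconds}s"

    for sec, text in sorted(notes.items()):
        print(f"{sec // 60}:{sec % 60:02d}  {text}  {jump_link('VIDEO_ID', sec)}")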


I want this for annotating drone footage that our customers post to YT so that I can have slack conversations with support and engineering more precisely.


I'd love to have this, specifically for YT guitar lessons. Please let me know when it launches!


Definitely interested and would love to be a beta-user.


To be honest, just an off-brand YouTube mobile app would do it, so I have control rather than Google.


interested in beta if you don't open it up to all, my email is in my profile.


Am working on a travel site focused on showing places with activities, with detailed information from how to get there to pricing, since most of the information on Google is outdated.

I had stopped working on it after reading some threads here on why disrupting travel is very difficult but seems easy to a tech person looking for ideas, and I almost shelved it. Then I remembered that most commenters here are from western countries that are probably reaching a saturation point of ideas, in that almost any app imaginable already exists. Anyway, the renewed interest has made me start from scratch trying to collect the data.

Anyone who would be interested in joining me as the coder while I focus on the rest, I'd be glad to team up. The focus is on building an extensive trip planner in which you can view a location, add it to a "cart", and at the end see the total cost of the trip. This can help solo travelers a lot. East Africa is still a popular destination. Reach me on ninaformke at g mail


This problem will only apply to very few people. But over the past 2.5 years I've been tracking everything and deriving insights on my activities. This produced some astounding results. e.g. Chewing gum makes me more comfortable in a conversation. Recently I've found a community of robot-like people on reddit who also do this. So I decided to build a platform. It's still in its early stages but feel free to check it out: simplelifedata.com


I just started trying to track data about myself for the same reason, but I've run into the issue that there are basically infinite variables, and I don't know which ones to choose. How have you approached that problem? You must have been collecting pretty granular data if you were able to associate chewing gum with comfort in social situations.


Can you point us to the community?

Any other insights you’ve discovered? I tracked my data for a year and discovered strawberries were a potent mood enhancer.


The challenge for me has been habit when I try to do this. If I have to manually break context and focus to log then I don't wish to. I want to monitor everything but passively without having to do any work.


I would like to have something like that too.

A centralized place to record things like that:
- Arts: cinema, theater, books, ...
- Travels: from, to, flight? car?
- etc...

Started writing some code but left it for other things; maybe I'll come back to it.


I’ve been thinking about this as well and even got bodmark.com to push out a prototype. (body benchmark) Let’s chat and maybe even collaborate.


I really liked the idea; I just wish there were a demo, or even a preview, without the need to sign up. It would make for better onboarding :)


I like the sign-up flow that simply uses a Google login, personally. And it's going to need some way to keep track of a user so they can update on mobile and PC, etc.

Mostly I agree on hating things requiring a login but I'm okay with it in this case.


Nice! This scratches a bit of an itch I've had for wanting to track some simple datapoints / personal log.


We're (a team of 2) rewriting an old enterprise ERP system made of ~1M lines of non-portable C89, tens of thousands of handwritten PL/SQL lines, thousands of business rules carefully abstracted in SQL data, tens of complex screens designed and scripted on a no-name RAD tool that was the fad back in the day, and some companion pieces in VB6 because, you know, 'C89, not anybody can do it'.

That's fun.

The new system is a quite simple SOA architecture with a dull, only-real-data DB layer, a backend in Go with code generation, and a frontend in ES6 migrating to Elm.

The look the IT guys give us when we say 'no really, we don't need IIS or Java' is priceless :)

The interesting part actually lies in handling both product management and sales for the new version while handling the day-to-day coding part.

Sometimes I think I should write a book on those subjects :)


I'm also in the planning stages of rewriting ~1M lines of C doing hardware control for a telecoms system. It's going to be replaced with Elixir (basically Erlang) because it's the perfect fit (which is unsurprising given it's a telecom system).

My prediction is that it will be 10-20k lines of code when I'm done because there's so much obsolete cruft to remove. Plus ~1k of C as a shim layer to allow an incremental transition.


Nice, glad to know there are other brave souls who choose the Big Rewrite path, despite the latent idea in our industry that every one of those projects is doomed to fail.


My approach is that there are a few things that the default answer for should be "no" and then you have to justify (maybe just to yourself) why they're appropriate in this case. Macro Fu in C, template meta-programming in C++ and rewriting from scratch are all examples of these.

In this case I am avoiding the 'throw it all away and start from scratch' approach. It would be infeasible for the intervening period. I am putting together an approach that would get us there in a year or so, but we can lop off smaller chunks to rewrite (the existing architecture is a series of daemons, which helps us there).


Personally what's interesting for me is:

1. (Assuming you're an outside contractor) How much do you charge?

2. How do you do estimates on this project? Or do you just do it bit by bit?

3. Are you switching over part by part or will deploy in a period?

If you can, you should at least write a blog post. A book will be amazing though!


I'm a simple employee with some management duties at a small company (20 people). I estimate my wage to be on the low end of the local market (French, not in the capital), but the position is quite unusual so it's difficult to compare.

At the beginning there were only vague goals, no estimates; now, 4 years later, I can tell how many days a piece of functionality will take within a 30% margin of error. But it's only based on personal judgment, tightly coupled to having personally designed the new architecture from the start.

The migration strategy I chose was to hijack the sales activity and pick which new projects should go on the new version. That way we can stay away from replacing years of specific functionality for picky customers until we have a good idea of how ready the business module is for prime time.

I should definitely write a book, there's so much to say - maybe in 2020! :)


Definitely quite a unique position compared to the common rewrite story: small company, long-term rewrite timeline, small rewrite team, and the ability to influence sales activity and deployment. I think these constraints are important to your success (and fun factor), but I don't have the complete picture. I really welcome any points against what I wrote above; currently I'm intrigued by reading about how software projects work or don't work in real life, and your story is really interesting.


That sounds super interesting. You should absolutely write a book...or, if not that, a blog. I'd read it.


/me gathering momentum...


A while back, I posted a list of my long term focus problems here [1]

Short list:

* Pollution and the climate

* Privation

* Avoidable death

* Interplanetary settlement

* Liberty and communication

* Transportation

My primary focus is developing and commercializing reliable clean energy, because I believe that is the most effective way to further progress in the majority of the above problems.

To that end, I've come to terms with an inability to spread my focus across all of them simultaneously and drive great results, so instead I've taken an approach of working on a few of them full time myself and investing in efforts that work on the others. My intent is to keep ~100% of my net worth invested in these main problems (either in my own or somebody else's projects) in perpetuity.

In my personal life, I've also recently been spending a lot of time thinking about health and purpose: how to build discipline, how people can/should decide what to do with their life, how to stay healthy and build fitness, etc.

Side project: in my free time over the last few weeks I've also been thinking more about how to create lasting models for information and media, and so I'm building a markup language / static site generator in that pursuit [2]

[1]: https://jwmza.com/long

[2]: https://jwmza.com/polymath


All those things are pretty much industries, let alone fields of study.

Also, consider that "keep ~100% of my net worth invested in these main problems" may not be the best strategy for funding those problems, versus "put net worth in growth investments" that can fund the same problems later on.


You have a good point but of course the split is adjusted to that consideration.

I don't believe in profiting from life-saving medication or anything like that, so my intention is to drive rapid growth with my activities in the energy, software, and automotive sectors to fund the less immediately profitable goals.

Make money solving the problems I can first, and later use that money to fund the further-off ones.

Currently this strategy is working well, as my assets are growing at above 2000% annually.


I'd love to chat -- we have a lot of common interests. My email's in my bio if you're interested.

I particularly like your idea of keeping your entire net worth invested in the problems you're focused on, and I'm curious to hear more about it.


Are you Elon Musk?


No I'm John Morrison, it says right there in the username. Usernames never lie ;)


I am working on something I am calling "compact models" (as part of a PhD): techniques to pack more information into Machine Learning models when their size is constrained in some way. I have put up some of our work here [1] - it has been interesting so far, and the results are promising. I would like to release a Python library soon, well, ...in a few months - my PhD is part-time, I have a full time day job and time management is a pain.

[1] https://arxiv.org/abs/1905.01520


Interesting name. :) I worked on developing compact models for the first few years of my employment at NXP Semiconductors, which is (for me, at least) the top result of a Google search for "compact models", defining them as "mathematical descriptions of semiconductor devices used in analog circuit simulators".


You're right, that's what the term seems to stand for (I had Googled too, when I was thinking of names). "Compact models" is just something I am considering calling this work -- it's by no means standard :)


I'm writing a 2D animation library inspired by 3b1b's manim. It's written in Haskell, fairly well documented, and is meant to be used together with external tools such as latex and blender. Design concepts with examples: https://reanimate.readthedocs.io/en/latest/glue_tut/

Source: https://github.com/Lemmih/reanimate


Standardized tracking for shipping containers.

Currently, tracking is limited either by A) type of transportation (ships, rail, trucks) or B) the freight forwarding company.

If you use multiple freight forwarders, you're stuck entering data from PDFs into spreadsheets to create your own custom usable dashboard.

If you use one freight forwarder, you have access to the main journey points, either as a spreadsheet or, if they're more sophisticated, through a web app. But I've only found one (Silicon Valley-backed) freight forwarder [0] that gives the last-mile data -- e.g. last free pickup dates, pickup numbers, last free dropoff dates, return locations etc. -- through their web app.

This is critical for managing warehouse operations, especially for companies that handle their own last-mile (like we do), and it's been an absolute pain as we've scaled.

0: https://www.flexport.com


I'm aggregating stock chatter from the worst parts of the internet (wallstreetbets, 4chan), summarizing it, and sending it out as a newsletter here: https://topstonks.com


Why is this a newsletter that you need to give your email address even to just sample the content? It would be much nicer if the website showed you the content with an option to subscribe to the newsletter or rss feed.


Hey, sorry for the confusion! There is a "read a sample" link that you can click on. I'll make that more prominent.


Aren’t there some legal hurdles to providing investment advice like that? I didn't find the standard disclaimers on your site. You might want to look into it before showing up on the SEC's radar.


At this point we're just covering what is being spoken about, not advising any course of action. I will look into this further and see if we need to add a disclaimer. Thanks for bringing this up :)


Currently working on 2 projects which would solve problems for me:

- a JDBC driver for interacting with Google Sheets

- a cross-OS application which lets you easily share data

The first one is almost done and mostly needs documentation and some clean-up. At the moment it supports simple SQL queries like SELECT * FROM, INSERT INTO foo () VALUES (), and a variant of UPDATE whose syntax I don't remember right now. It also already has a DataGrip integration.

The second project shall work wirelessly and with minimal setup. The original idea was that devices search for each other on the local network (via broadcast) and then connect (see the sketch after this list). Further ideas that arose during development were:

- play sound on another device (which I initially thought would be super easy but seems like it is not)

- provide a way to integrate outside applications (you supply a configuration file describing how to communicate with your application, and this lets you show information on other devices)

- Not just device-to-device but also something like groups

- messaging with other devices

- more communication possibilities, i.e. via (outside) IP or Wi-Fi Direct
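
Here's the kind of broadcast discovery I mean, as a minimal Python sketch (a single hard-coded UDP port and no security -- the real thing needs both):

    import socket

    PORT = 50555  # arbitrary choice for this sketch

    def listen(name):
        # run on every device: answer discovery probes with our name
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", PORT))
        while True:
            data, addr = s.recvfrom(1024)
            if data == b"who-is-there":
                s.sendto(name.encode(), addr)

    def discover(timeout=2.0):
        # broadcast a probe, then collect every answer until the timeout
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        s.sendto(b"who-is-there", ("255.255.255.255", PORT))
        peers = []
        try:
            while True:
                data, addr = s.recvfrom(1024)
                peers.append((addr[0], data.decode()))
        except socket.timeout:
            return peers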


Downloading has been resumable since HTTP/1.1 but people uploading content have a less reliable experience. Worse: typically upstream bandwidth is much lower so they are exposed longer to unreliable connections.

Trying to fix uploading through tus.io (low level protocol) and uppy.io (user interface). Both open source and free to add to any project.
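
The heart of the protocol is small: ask the server how many bytes it already has, then PATCH from that offset. A minimal Python sketch using the tus 1.0 headers (in practice you'd use one of the official client libraries):

    import requests

    def resume_upload(upload_url, path):
        # ask the server how much of the file it already received
        head = requests.head(upload_url, headers={"Tus-Resumable": "1.0.0"})
        offset = int(head.headers["Upload-Offset"])
        # send only the remaining bytes
        with open(path, "rb") as f:
            f.seek(offset)
            resp = requests.patch(
                upload_url,
                data=f.read(),
                headers={
                    "Tus-Resumable": "1.0.0",
                    "Upload-Offset": str(offset),
                    "Content-Type": "application/offset+octet-stream",
                },
            )
        resp.raise_for_status()
        return int(resp.headers["Upload-Offset"])  # equals file size on success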


I’ve been using uppy with Transloadit on a recent project and it’s completely changed how I think about file uploading from the web. It’s no longer a huge undertaking. Love the service, amazing work. Now if the video/concat robot was fixed I could use it 100%.


Writing Mathematica software with Stephen Wolfram's support to extend the hyperoperators beyond exponentiation - tetration, pentation and so on, from the natural numbers to complex numbers and even matrices. I do this by extending the iteration of any smooth function to real and complex iterates. http://iteratedfunctions.com/ and http://tetration.org/. Physics has two mathematical methods for its representation, partial differential equations and iterated functions. My work is more general than physical systems or even the universe because I can consider both measure and non-measure preserving systems. I am looking at AI applications as a system that is tuned to solve for physically possible models.
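
For readers unfamiliar with hyperoperators: the whole ladder falls out of one recursion. A sketch in Python rather than Mathematica, and for natural-number arguments only -- extending it to real and complex iterates is the actual hard part:

    def hyper(n, a, b):
        # n=0 successor, n=1 addition, n=2 multiplication,
        # n=3 exponentiation, n=4 tetration, n=5 pentation, ...
        if n == 0:
            return b + 1
        if n == 1 and b == 0:
            return a
        if n == 2 and b == 0:
            return 0
        if n >= 3 and b == 0:
            return 1
        return hyper(n - 1, a, hyper(n, a, b - 1))

    assert hyper(1, 2, 3) == 5    # addition
    assert hyper(2, 2, 3) == 6    # multiplication
    assert hyper(3, 2, 3) == 8    # exponentiation
    assert hyper(4, 2, 3) == 16   # tetration: 2^(2^2)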


I won't pretend to fully understand this, although I do understand the basic theory of higher-order hyperoperators. I'm curious, though -- where/what are the applications for this?


The Universe is a hierarchy of orders: quarks and gluons, atoms, molecules, cells, multi-cellular organisms and so on. So hyperoperators form a natural hierarchy to model multilevel systems. Feynman's Path Integral with the integral removed is just tetration. So I believe the higher operators, through self-organization/renormalization, are directly associated with specific levels of reality, but except for QFT I have no suspicion as to which hyperoperators might be associated with which levels of physics. Here is how it might work: hexation might model chemistry while heptation could model simple biological systems. Thanks for asking. This is fun stuff to work on.


That explanation was surprisingly accessible, thank you! It also blew my mind. I'd never thought of the building blocks of the universe as forming a hierarchy before, although in retrospect, it feels obvious.

Were you being literal when you gave the examples of chemistry as hexation and biology as heptation? If so, why are they those levels specifically? Or were you just using those as examples because they're roughly one "step" apart on the hierarchy (i.e., molecules -> cells)? Sorry if this is a dumb question.


I strongly believe that quantum field theory is at the level of tetration or pentation. The others were possible examples. Thanks for the question.


Most folks have been reading about the psychedelics renaissance.

A humongous problem is the absolute lack of data that many psychedelic-assisted therapists, guides, and spiritualists have to show that their specific types of therapy are effective.

You come across folks who make humongous claims about the specific modalities they use, but they don't track the progress of their patients and therefore don't have the data to prove it. We're working with volunteers at Tabularasa.ventures to develop some simple applications to both screen clients and allow practitioners or individuals to record progress (reductions in depression, PTSD, etc.) over time, whether treating with microdosing, self-administration, or more standard psychedelic-assisted therapy (PAT) methods.

Happy to collaborate -> marik@tabularasa.ventures


In the Caribbean, because of scamming and fraud, opening an account at any financial institution or similar place requires several documents and often a visit to an authority figure (Justice of the Peace) to verify your address and provide a character reference. On average you will need your TRN (Tax Registration Number -- basically an SSN), some form of photo identification, address verification, references, and proof of income, and the list can go on depending on what you're trying to accomplish.

We're basically trying to make an opt-in service that makes procuring these relatively painless by grouping all relevant parties and then keeping the documents on record. A glorified KYC of sorts; we're then looking to use it as a means of authentication (I should be able to use my profile to sign up anywhere, or transfer my data, or parts thereof, to another party). Lots more to flesh out, but we have a good grasp of where we're going and what we want to achieve. Our government has tried to do this in the past but failed to get it past the courts due to privacy concerns, and is set to try again. I skimped on some details but the idea should be clear.

With new data protection laws introduced/proposed and more amendments to come, it's a simple but interesting problem at this point in time: navigating everything from how we do our own verification and security to the eventual licensing needed to achieve the desired outcome.


Where in the Caribbean are you?

Wouldn’t that just make a juicier target, since a hacker would have one place to go?

I notice a lot of people help each other against the perceived government inflexibility, so they'll lend ID cards etc. Won't this just increase the number of identifiers leaked when they do so?


I'm building a tool that helps people scaffold React apps really quickly (everything from auth flow, payments, DB, form handling, etc, to an actual nice looking UI). It's at least interesting to me because I think a ton of time is wasted on all this and I'd like to help more people get their idea out there rather than reinvent the wheel. If you're interested in taking a look and giving feedback it's https://divjoy.com


I really like this idea and the execution. Great job.


Thanks Tyler!


Building a library to deduplicate data at scale in Apache Spark, where there is no unique record identifier (i.e. fuzzy/probabilistic matching).

https://github.com/moj-analytical-services/sparklink

It's currently in alpha testing, but the goal is for it to:

- Work at much greater scale than current open source implementations (the ambition is to work at 100 million records plus)

- Get results much faster than current open source implementations - ideally runtimes of less than an hour.

- Have a highly transparent methodology, so the match scores can be easily explained both graphically and in words (it isn't a 'black box')

- Have accuracy similar to the best products on the marketplace.
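
For those new to record linkage: the trick that makes this tractable at scale is 'blocking' -- only generating candidate pairs that share a cheap key, rather than scoring all n^2 pairs. A toy illustration in plain Python (the library does this in Spark, with probabilistic match scoring on top):

    from itertools import combinations

    people = [
        {"id": 1, "first": "Jon",  "last": "Smith", "dob": "1984-01-02"},
        {"id": 2, "first": "John", "last": "Smith", "dob": "1984-01-02"},
        {"id": 3, "first": "Jane", "last": "Jones", "dob": "1990-05-17"},
    ]

    def block_key(rec):
        # crude rule: candidates must share surname + birth year
        return (rec["last"].lower(), rec["dob"][:4])

    blocks = {}
    for rec in people:
        blocks.setdefault(block_key(rec), []).append(rec)

    # compare only within a block, cutting pairs from O(n^2) to near-linear
    candidate_pairs = [
        (a["id"], b["id"])
        for group in blocks.values()
        for a, b in combinations(group, 2)
    ]
    print(candidate_pairs)  # [(1, 2)] -- the only pair worth scoring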


I'm currently working on a solution involving larger data sets to match a record with a binary score (0/1). I'm using Redis with the Bloom filter module. This works in that query results are sub-second, but the data ingestion/filter population part is comparatively slow (<100 MB/s). Another blocker for me is having to use multiple filters to query across multiple sets, which just multiplies all the resources needed. Does Spark have any advantages or specialized filters for this use case? (I have nil experience with Spark, but am ready to dig in if it would really help.)


I've had an idea for some time now to create a website that would act as a better codereview.stackexchange.com. It would incorporate some of the features of the GitHub Pull Request system like inline commenting and reactions.

I arrived at this idea from two directions. The first is that I sometimes try to code review some of the questions over at CodeReview SE, and the whole thing feels unergonomic. I dislike scrolling up and down to check the code and constantly losing track of the things I'm reviewing. This is where I think inline commenting would help. Also, there is not a lot of room for discussion. You only get those comments below the review, where you only have a few characters to argue your point.

The second direction is that I produce code snippets (programming homework, short snippets at work, etc.) which I would like to submit for review. I don't always want to submit them to the entire internet for review. I just want to get a private link to the code review, which I can share with my colleagues so they can review it. Kind of like a reviewable PasteBin.

Some of the features I would like to add: importing files from GitHub for reviewing and users could import their unanswered CodeReview SE questions for another review.


I thought about the same thing! I think having your code reviewed is one of the fastest ways to improve, and at the same time it's currently very hard to get done.

I really dislike CodeReview SE for many reasons though; I don't think the Q&A format is suitable for doing CRs.


Hey, what kind of things do you dislike about the CodeReview SE and what would you like to see in an improved code review system?


I believe I could be working on solving bigger problems, but first I need to focus on my mental health, a healthier work-life balance, and providing the primary income for my family.


Don't discount how important those problems are! Kudos to you for being intentional with them, and good luck. I wish you the best.


Thanks!


Two thumbs up, my friend. Best of luck with it.


Thank you, I appreciate your comment.


For hobby development, I'm trying to speed up the unofficial PlayStation emulator on the Nintendo 3DS. There are all kinds of interesting problems there, like SD reads being so slow that it tanks the framerate any time the emulator hits the disk (so I might be writing a read thread + precache?), and some apparent room for hardware-specific optimization in the lighting and blending routines.

It's been fun to work around the constraints on an underpowered device. It's also an excuse to learn ARM assembly, and a nice break from all the JavaScript I've been spending my time in lately!
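
The read-thread idea, modeled in Python for clarity (the real thing would be C/ARM on the 3DS, would use a condition variable instead of the polling loop here, and "game.bin" is a placeholder path):

    import threading
    import queue

    SECTOR = 2048  # CD-ROM user-data sector size
    cache = {}
    lock = threading.Lock()
    wanted = queue.Queue()

    def prefetcher(disc_path, lookahead=8):
        # worker: service requests, reading a few sectors ahead of each one
        with open(disc_path, "rb") as disc:
            while True:
                sector = wanted.get()
                for s in range(sector, sector + lookahead):
                    with lock:
                        if s in cache:
                            continue
                    disc.seek(s * SECTOR)
                    data = disc.read(SECTOR)
                    with lock:
                        cache[s] = data

    def read_sector(s):
        # emulator side: serve from cache when possible, otherwise wait
        with lock:
            if s in cache:
                return cache[s]
        wanted.put(s)
        while True:
            with lock:
                if s in cache:
                    return cache[s]

    threading.Thread(target=prefetcher, args=("game.bin",), daemon=True).start()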


Building a personal budgeting system that reduces the complexity of the process to paying attention to 1 number and about 5 minutes per week to be sure you're on budget all the time. I came up with a solution to this problem about 5 years ago and have been testing iterations with friends and family. In process of building an app to manage it for me.


I got budgeting down to two numbers: Total take-home pay past 12 months vs total expenditures past two months. Try to avoid the latter going over 85%.


It's awesome that you have a budget and a process for doing it. Would you find it useful to know whether or not you can afford something you're about to buy, be it a taco, a couch, or a car? Your budget works well as a reflection process; my budget gives you constant knowledge of how you're doing right now and how you'll probably be doing in the future.


I use a spreadsheet with bulk spending categories as rows and months as columns, plus a few summary columns. More than 15 years ago it was just a table on graph paper. I know within a few dollars how much I have earned and spent in the past 12 months. The 'budget' is awareness of that spending and trying to keep it not much above the previous year's. Stuff happens, like a dead car, a hospital stay, a job change, etc., so it's not always firm.


An NIR spectrometer for assessing the ripeness of wine grapes. It is a palm-sized device that you take to the vineyard, and by scanning many bunches you get the numbers you need: Brix, pH, acids.

There are many research papers talking about this going back years, but until recently there was no cheap enough hardware available, so it stayed stuck in university laboratories.

It is still a tough project to pull off as it combines hardware, cloud software, and machine learning, and there is quite a lot of laboratory work required as well. Doing all of this as a single person while bootstrapping is an extra challenge, but I guess I don't know better.

I got into it 5 years ago, when I decided to quit technology, bought a small farm and built a winery. At first I wanted to analyze the wine itself, basically to make the traditional method obsolete, but the performance of this kind of instrument is not good enough for a liquid that complex. It turned out grape analysis is a much easier target to tackle.


Problem:

The hassle of splitting proceeds from a service/event/product sale after the fact: sending/collecting your % share, and the timing and details of wiring the proceeds.

Solution:

Pre-set allocations and create a customized checkout so that splits happen on a per-payment basis. Members don't have to wait to get their share.

https://www.korabo.io

The idea kind of came about after watching my wife, who is a yoga instructor/studio owner, try to split proceeds from a workshop she hosted with a few collaborators.

Another example: allows you to create a shield to a checkout that will split proceeds on a per payment basis.

https://github.com/surfertas/deep_learning/tree/master/proje...

Working on this in my spare time. Any advice from the community would be greatly appreciated.


Is this using Stripe Connect behind the scenes? They are trying to solve a similar problem. I tried to register on Korabo, but after I signed up and went to log in I got an error saying I needed to confirm my email. I haven't gotten the confirmation email though, so something may be broken in your onboarding flow.


Thanks a lot for the response. Yes, exactly -- it's using Stripe Connect and related APIs.

Basically just leveraging a lot of Stripe's features, which are great. Their support has been super helpful as well, advising on what can and cannot be done.

Noted on the onboarding issue. Will check what's going on there. When I signed up myself, Gmail classified the email as junk/spam, possibly rightfully so.

Thanks again.


I'm working on figuring out four different ways for somebody who has a light background in electronics (basic soldering, really) and a Technician or General class ham radio license to get on the air, from scratch, for $100. No added expenses. It's possible, and a fun challenge. It's research for some writing I'm doing.


What kind of band are you thinking?


I wrote a browser-based simulation game in my spare time:

http://aperocky.com/prehistoric

It's already got pretty sophisticated production logic, and also a unified market.

Looking to add a few features like child support, new resource types, and maybe eventually a governmental system -- you could even try out different government strategies.

If you have any ideas please share. It's been my passion for 2020 so far.


On the first turn I ran, the first person I moused over had my aunt's name.


All the people generated seem to be females with Anglo/Western names.


That's one of the areas that can be improved...


I started off seven years ago wondering why backlogs were so bad. It seemed like both small teams and large organizations always suck at them. I had read tons of how-to books and watched lots of videos. Many of the instructions conflicted with one another, however. What I wanted to know was why, not how-to. If I understood the why I wouldn't need the how-to.

Being a good hacker, I pulled at that thread until I had another, and another. Now I'm writing about semiotics, language, lambda calculus, and philosophy of science. It's all related to my original quest for a better explanation, and it affects everything from AI to coding practices. I'm about done; now the trick will be getting it all into a format that's consumable by the average programmer.


I'd love to read some of what you've written, if you're willing to share.


I work on software that's used by NASA (and other organizations) to model spacecraft missions. This project spans the gamut of interesting problems in computer science: numerical methods, high-fidelity orbit modeling, orbit determination (using Kalman filters to estimate spacecraft state), complex 3D visualization, language parsing, IDE design, and many more topics.

It's definitely one of the most interesting projects I've ever worked on!
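
For a taste of the estimation side: the scalar core of a Kalman update fits in a few lines (toy Python -- real orbit determination runs extended/unscented filters over the full multi-dimensional spacecraft state):

    def kalman_update(x_pred, p_pred, z, r):
        # blend a predicted state with a noisy measurement
        k = p_pred / (p_pred + r)        # Kalman gain: how much to trust z
        x = x_pred + k * (z - x_pred)    # corrected estimate
        p = (1 - k) * p_pred             # corrected variance (always shrinks)
        return x, p

    # predicted position 100 km (variance 25), measurement 104 km (variance 16)
    x, p = kalman_update(100.0, 25.0, z=104.0, r=16.0)
    print(x, p)  # ~102.4, ~9.8 -- pulled toward the measurement, less uncertain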


Which project is that ? I'm also looking for something of this sort ..



Helping fintechs and other startups access temporary cybersecurity defenders: https://www.getblueteam.com

Having run a cybersecurity services business for three years and previously working for federal clients, I know that government and large banks are sucking the talent up, leaving fintechs two options: ignore security or overpay.

On the reverse side, there are lots of talented independent providers who simply need somebody to vouch for their skills. We meet with and vet everybody on our platform to make sure they have the capabilities.

Will be launching a prototype to replace this landing page shortly. If you're in the New York area and are either looking for cybersecurity contractors or looking for a project, I would love to get your input!


Company alignment. I am working on a systematic framework ("way to think about," "a way of doing things") to establish company value alignment.

Most companies are at some point internally misaligned: marketing fighting product fighting development fighting design fighting sales.

All want to contribute value; all hinder each other in the process of doing so.

The goal is one framework where a) an initiative can start from any group/team/individual within the company, and b) every other part of the company can rally behind it -- with their own expertise and point of view.

I always start with a talk (gave the first about it last Thursday https://jtbd.ws/), then I take it from there.


This sounds really interesting; as a marketer, I am constantly butting heads with other departments while simply trying to do my job. I'd love to hear more when you've developed the framework!


Working on the problem of "blue" light affecting circadian rhythms and sleep. We launched our MVP, Bedtime Bulb [0], in 2018, and it's now the most popular product in the category. We're expanding out of North America to Europe in the next couple weeks.

We've had a ton of great feedback from customers, and we are working on several new sleep technologies that we plan to release this year.

It's also been interesting to apply the lean methodology to hardware. Iteration cycles are long, but I'd argue that lean is just as important for hardware as it is for software.

[0] https://bedtimebulb.com/


The issue I have with this is that there isn't a consensus about which, if any, of these products work. The IES (Illuminating Engineering Society) has released a couple of articles about why consumers should be wary of products that claim to control circadian rhythms, and after attending a talk from the IES on the subject I'm not convinced any of it works (at least to the degree the marketing says it does).

https://www.ies.org/fires/circadian-lighting-an-engineers-pe...


I've had a few discussions with the author of that article. Not intending to put words in his mouth, but I believe he is effectively saying 2 things:

- The metrics Circadian Stimulus and Melanopic Lux are flawed

- We don't know enough to produce products yet

I agree with him on point 1, especially about CS. The author, and several others, have demonstrated a huge discontinuity around 3500K. The CS model needs improvement.

Regarding melanopic lux, the author says it is more a problem with our measurement and modeling tools. Most manufacturers won't provide a Spectral Power Distribution, but this is starting to change (we do). There is at least one clinical trial in progress that is testing this metric directly, and several experts think it shows a lot of promise.

With that said, new research on blue vs. yellow/orange opponency is coming out. Basically, we're trying to figure out contribution of the visual system on top of melanopsin. I'm not sure what to make of it yet, but I think we'll find out much more in the next 2 years. We probably need a better metric than melanopic lux.

On point 2, I agree that our knowledge is changing. But it takes a number of years for standards to be developed, much slower than the rate of technology improvement. We are paying attention to all the new research, but we based our design on our best understanding of the research to date.

We know that we can really only attract early adopters at this stage who "get" it, and that most people are going to wait and see until things are more standardized. But we are effectively able to do mini-experiments with our customers, which can lead to more insights/avenues of experimentation.

Case in point: f.lux. Research has shown that it is somewhat effective on its own, but combined with dimming, it could be quite effective. That's actually what we're doing with our product -- controlling both the spectrum and intensity. And f.lux is able to run experiments with a portion of their users to advance our understanding.

Basically, my opinion is that we have to start somewhere, and we believe we have enough evidence supporting that our solution is "better than nothing." We were able to prove that there is enough of a market for these types of solutions already, even though our knowledge is still evolving.

(edited for a paragraph spacing typo and a 2nd time for a grammar issue)


May I also suggest your company look into the opposite: extremely bright lights. I'm picturing one bulb that can go extremely bright in the morning for things like SAD, and then adjust throughout the day.

You may think extremely bright SAD lights are already readily available, but there's anecdotal evidence to suggest they are insufficient:

https://www.lesswrong.com/posts/koRZu53LBZEapwww6/could-some...


I have had sleep issues my entire life. I've cut out sugar and caffeine (both for other reasons) and used blue light filtering applications on my devices. None made a significant difference. I think another issue worth looking at is people not getting a daytime light signal. I purchased an ultra-bright light that I saw in this post [1] and it seems to have helped more than anything I have tried before.

1. https://www.benkuhn.net/lux


Have you tried a cold shower/bath followed by warm bed, or a hot bath followed by a cold fan?

Big core temperature changes usually make me feel sleepy.


Yep, reducing blue light at night is an important part of the problem, but it's not a panacea. We realize that everyone has different sleep needs, and much of our focus this year will be helping people understand what inputs have an effect on their sleep.


Using constraint programming to schedule generic experiments in an automated lab. Experiments are complex and fragile, so we expose a DSL for describing the constraints and objectives of each task so that the biology/chemistry doesn't go awry. One of the hardest parts of this isn't the optimization but the upfront work of defining what is and isn't necessary to encode about an experiment. You want the API to provide as much control as possible without allowing the author to over-constrain the problem or introduce irrelevant steps into an experiment just because that's how they're done by hand.
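
For a flavor of the approach (not our actual DSL -- a minimal sketch using Z3's Python bindings, with three hypothetical tasks sharing one instrument):

    from z3 import Int, Optimize, Or, sat

    durations = [3, 2, 4]  # hypothetical task lengths on one shared instrument
    starts = [Int(f"start_{i}") for i in range(len(durations))]

    opt = Optimize()
    for s in starts:
        opt.add(s >= 0)
    # no two tasks may overlap on the shared instrument
    for i in range(len(starts)):
        for j in range(i + 1, len(starts)):
            opt.add(Or(starts[i] + durations[i] <= starts[j],
                       starts[j] + durations[j] <= starts[i]))
    # minimize total schedule length
    makespan = Int("makespan")
    for i in range(len(starts)):
        opt.add(starts[i] + durations[i] <= makespan)
    opt.minimize(makespan)

    if opt.check() == sat:
        print(opt.model())  # one optimal assignment of start times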


That's really cool! I just started learning about constraint programming using Z3, and scheduling is the first area I'm looking at. Do you have any advice or resources you found useful? Much of the underlying theory goes over my head, but I'm not sure how much of it I need to be productive.


I wish you all the best working on this problem, very challenging and worthwhile in my opinion. I've worked for 6 years in (mostly unautomated) chemistry and biology labs, and know how finicky experiments can be (in expected and unexpected ways...)


Paysapp, a worldwide payment system that lets you send money as simply as texting a message like "pay 100 to George". It's available right now in WhatsApp, Telegram, Keybase, Matrix, Discord, Slack and Twitter; just add the Paysapp bot to your chats and type 'help' to start.

We're at a very early stage and looking for investors.


Which countries is it available for? Are you compliant with all the complex and sometimes tiresome regulations on cross border transfers?


Worldwide, all 200 countries wherever there is internet.

Regarding regulations, we are studying two possible scenarios: comply with everything, or simply Uberize the model and let people sell money for a fee. For the former we need tons of money, attorneys, and on/off ramps with the banking world. For the latter we would only be the messaging transport, and people would buy and sell money informally -- that's the long tail of the unbanked and informal merchants.


Wrapping Bitcoin trustlessly for use in Ethereum smart contracts. https://tbtc.network


I've been following your project — very interesting and exciting.


I built a chatbot to fight loneliness and social isolation among seniors. It started with my grandma. She doesn't have a smartphone or internet, so it basically transforms photos into real postcards. She receives them directly in her mailbox, and it makes her really happy. The chatbot also reminds me to send one when I haven't, so she gets updates regularly. I released it to the public last week; you can find it here: https://postcard.im (the link opens a FB Messenger conversation with the chatbot)


Reimagining what a phone interface could look like. If you have an interest in this too, send me a note. (markkinsey@gmail.com)

I just like reimagining things, trying to elucidate first principles and go from there.


Amateur product designer here, I'm really interested in what you would consider to be first principles when it comes to expected appearance/behavior/functionality of a phone, especially nowadays when the "phone" part of such a device/interface is almost an afterthought.

I'm also interested to hear what you think are the shortcomings or limitations of our current idea of a (smart)phone interface.


I'm working on scaling up a network of devices connected to our laboratories aboard the International Space Station for K-12 education. Our 7th mission launches on the Cygnus resupply NG-13 on 2/9.

As we connect classrooms and scale across different countries, the problem set has grown exponentially.


I'm making YouTube content to help people learn Google Cloud and also prepare for the GCP certifications: https://www.youtube.com/channel/UCIGDDqu5DzlaaC4XzXj_4-A

There is an unlisted sample video in there that I've put out for feedback, and I'm making changes based on that. Will also be putting together related content around GCP.

p.s. do subscribe to the channel.


Hey, I wasn't sure where to leave the feedback so I'll leave it here: it might be a good idea to get a microphone to improve the sound quality and clarity. It's a bit hard to hear. Keep it up!


I'm trying to finally learn parsing properly. I run into a lot of little problems in my day job and have a lot of ideas for side-projects that I think would be served by having a better handle on it. So I'm creating toy languages and writing toy parsers for them.

One reason I'm targeting parsers in particular is because I've been finding a lot of modern programming language books are a bit anti-parsing these days. EOPL avoids parsing altogether by using a parser generator, effectively saying that it's a hard problem. PLAI outright calls parsing a "distraction". SICP (not strictly a compiler book, I know) and Lisp in Small Pieces just use the triviality of parsing () languages, which I feel doesn't generalize well.

I emailed the author of PLAI (Shriram Krishnamurthi) about this. His response was effectively that modern books come off anti-parsing as a reaction to old books, which were parser heavy, and tools like YACC -- "Yet Another Compiler-Compiler" -- even though it's just a parser generator, not a compiler compiler! He went further to say that, given parsing is roughly trivial in () languages, it sort of seems parsing is only incidentally a compiler/interpreter problem, and users of () languages view non-trivial parsing as signalling a design flaw. I found this to be an interesting take, but in my day job I generally don't have much say in the design of the "languages" of semi-structured text that get thrown my way.

Anyway, I know the Dragon Book covers parsing in some detail but for some reason it's been kind of impenetrable for me -- it feels a bit more abstract than I like. I can follow it, but while reading it I can't help but wonder -- "is this actually going to help me in practice?"

I have recently been reading Niklaus Wirth's stuff, though, like the last chapter of his algorithms book and his Compiler Construction book, and those have been absolutely fantastic.
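
(For flavor, here's the shape of the kind of toy recursive-descent parser I've been writing as an exercise -- a sketch in the Wirth style, not code from any of those books:)

    # Toy recursive-descent parser for expressions like "1 + 2 * (3 - 4)".
    # Grammar: expr = term {("+"|"-") term} ; term = factor {("*"|"/") factor}
    #          factor = NUMBER | "(" expr ")"
    import re

    def tokenize(src):
        return re.findall(r"\d+|[-+*/()]", src)

    class Parser:
        def __init__(self, tokens):
            self.tokens, self.pos = tokens, 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def next(self):
            tok = self.peek()
            self.pos += 1
            return tok

        def expr(self):
            node = self.term()
            while self.peek() in ("+", "-"):
                node = (self.next(), node, self.term())
            return node

        def term(self):
            node = self.factor()
            while self.peek() in ("*", "/"):
                node = (self.next(), node, self.factor())
            return node

        def factor(self):
            if self.peek() == "(":
                self.next()
                node = self.expr()
                assert self.next() == ")", "expected ')'"
                return node
            return int(self.next())

    print(Parser(tokenize("1 + 2 * (3 - 4)")).expr())
    # -> ('+', 1, ('*', 2, ('-', 3, 4)))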

I also asked a question on SE about a particular parser I'm working on -- if anyone has some thoughts I'd love to hear them :)

https://codereview.stackexchange.com/questions/236222/recurs...


Working on a concept of how one would crowdsource a wikipedia-like site with the purpose of gathering information about how technology and tools progressed from the Stone Age to today. Sort of a manual for bootstrapping civilization from ground zero.


I am working on an underwater recording studio. We are building a "3D Telescope" underwater to listen to the ocean: 28 hydrophones mounted on five 5-meter beams connected like a starfish. It will then compress the data and telemeter it home in real time.

https://www.whoi.edu/press-room/news-release/whoi-awarded-1-...

Additionally, I just won a grant at work to begin designing and building an open source underwater glider. Underwater gliders are one of the best ways of carrying instruments to sample the ocean. They can last 6+ months and be directed to interesting areas. The billion-dollar companies that make and sell underwater gliders are focused on oil+gas+military business and are not giving the service, support or product depth the science community needs. They are in dire need of a tech refresh -- they fail a lot for an old technology, and they run DOS. The only way we have a chance of understanding the ocean is to make sampling the ocean more affordable, reliable and accessible.
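
(If you're curious how an array like this gets "aimed" in software, the core trick is delay-and-sum beamforming. A toy numpy sketch with made-up geometry, just to illustrate the principle -- nothing like the real processing chain:)

    # Toy delay-and-sum beamformer: steer an array of hydrophones toward a
    # direction by delaying each channel so a plane wave from that direction
    # adds coherently. Geometry and numbers are made up for illustration.
    import numpy as np

    C = 1500.0     # speed of sound in water, m/s
    FS = 48000     # sample rate, Hz

    def delay_and_sum(signals, positions, direction):
        """signals: (n_sensors, n_samples); positions: (n_sensors, 3) in
        meters; direction: unit vector pointing from the array to the source."""
        delays = positions @ direction / C                       # seconds
        shifts = np.round((delays.max() - delays) * FS).astype(int)
        n = signals.shape[1] - shifts.max()
        aligned = [s[k:k + n] for s, k in zip(signals, shifts)]
        return np.mean(aligned, axis=0)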


Does the surface buoy generate power? How is it powered?

I thought of doing something similar but based on a single vertical line, it probably needs to be flat to account for flows and thermal gradients refracting sound.

What CPUs does it use? Is there a local backup of the sound field on the buoy? Is the full sound field sent up, or does it have to be "pointed at" something?


After reading a recent HN post [1], I've started to work on my own implementation of an open source fast file transfer client-server application. It's at a very early stage now, so there's nothing to show yet, but I'll be very glad to announce it when the MVP is ready. [1] https://news.ycombinator.com/item?id=21898072


I remember reading that post and the surrounding discussion, and thinking it was a really tough (and worthwhile) challenge. Good luck, and I'm excited to see the first version when you release it.


I am building an encrypted file system that runs inside the application (https://github.com/zboxfs/zbox). It focuses on security, privacy and reliability. The interesting part is that it can utilize different storage backends, such as memory, the OS file system, RDBMS, and object stores. I've learned a lot and enjoyed working on this project.


I am working on giving solo entrepreneurs and micro business owners back more of their valuable time by eliminating menial admin tasks. Time is typically your most valuable resource but when you are working alone or on a small team, someone still needs to take care of the admin, eating away at your valuable time.

I’m starting with the sharing of common information with clients and partners. Organizations are often required to supply information on a regular basis to a wide range of clients and partners (bank account details, company registration details, tax clearance documents, certifications, charity registration numbers, etc.). A lot of these documents need to be renewed on an annual basis, so there is a constant stream of requests for updated versions.

For bank accounts, the ability to verify a bank account automatically can prevent invoice fraud.

I’m looking at a model where a piece of business information is uploaded to a central platform, and the owner then grants permission for others to access it and to receive notifications when a new version is available.

I’m in the first Startup School batch for 2020 and working on validating the problem with actual users.


I'm helping build a mathematics tutoring system. Compared to classic math learning, we are trying to mimic the strength of a real tutor: identifying which math skills are missing and teaching those.

From our experience, the biggest issue students have is that they can't solve a problem because they didn't understand a concept they already "learned" in the past. It's simple, yet powerful.


I would love to learn more about this product.


Sure. Here is our demo page. https://demo.amy.app We usually white label our product and try to work with bigger companies like publishers to get our products into the hands of students.

But, we are still a very small start-up.


Is it on github?


Making authentication in government and public services more secure.

At the moment, I'm fighting with a monolithic, untouched Java 8 / JavaEE6 service which has lots of old dependencies and uses old cryptographic ciphers, some of them classified as unsafe (e.g. brainpoolP512r1).

No one knows how to make a reproducible build: everyone gets a different package that may or may not work, some modules are not even released in Maven (still using the infamous -SNAPSHOT versions), and there's no documentation. Unfortunately, there's little testing, so everything can break easily without anyone knowing.

Some developers are also really undisciplined, touching code but not running end-to-end (manual) testing, or even running the installer.

If I had the decision power, I would throw this thing away and start from scratch, probably without Java too -- or, if Java, at least the latest one, and maybe Spring rather than JavaEE: Wildfly moves too fast and each release breaks settings compatibility with the previous one (Red Hat: why do you do this??)


I'm trying to write a drop-in LLVM codegen replacement, i.e. something that takes bitcode (which I already have) and generates x86_64, ARM, object files, etc. Back story: I've done compilers for most of my professional life but never did the actual native code generation myself, always relying on .NET, Java or LLVM for that part.

As a fun project, since I already had code to generate LLVM bitcode from .NET, I now do mem2reg (converting stack slots to SSA registers), dead code elimination, constant folding and other small optimizations. That part now works, and I managed to create a simple x86_64 COFF object file (with everything needed to link to it, including pdata and xdata) that returns the "max" value for a given integer.

That is about all that works for now, and I don't get to spend much time on it, but the end goal is to have a "good enough" code generator for non-optimized cases that could potentially be faster than LLVM (to emit). The primary goal is to learn how to do this, though :)
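
(As a flavor of the simpler passes: constant folding over a toy three-address representation fits in a few lines. A sketch, not my actual bitcode representation:)

    # Toy constant-folding pass over a list of three-address instructions.
    # Each instruction is (dest, op, arg1, arg2); args are ints or register names.
    OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b}

    def fold_constants(code):
        known = {}   # registers whose value is a compile-time constant
        out = []
        for dest, op, a, b in code:
            a = known.get(a, a)
            b = known.get(b, b)
            if isinstance(a, int) and isinstance(b, int):
                known[dest] = OPS[op](a, b)      # fold; emit nothing
            else:
                out.append((dest, op, a, b))
        return out, known

    code = [("r1", "add", 2, 3), ("r2", "mul", "r1", 4), ("r3", "add", "r2", "x")]
    print(fold_constants(code))
    # -> ([('r3', 'add', 20, 'x')], {'r1': 5, 'r2': 20})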


I’m building a highly customized, web-based inspection data and quality management system at a medical device/aerospace manufacturer that is essentially replacing a lot of old VB code, with some additional stuff.

Having previously worked at a marketing company and a startup, it’s been fascinating to experience a legacy manufacturing company growing (or trying to grow) into the future.

Yes, the engineering problems are fun and all, but I think the most fascinating part has been thinking about what American manufacturing will look like 5, 10, 20 years down the road.

In my experience, American manufacturers will NEED to invest in Industry 4.0 tech in order to mitigate costs associated with rising wages, shortages of skilled machinist labor, and greater demand from consumers/regulators/OEMs for information and transparency.

I’ve also been quite amazed at how much paper is still used and the lack of industrial software products with quality UX.

And I don’t think American manufacturing will ever cease to exist.


I’m working on Newsy.

https://www.newsy.co

I have quite a few domain names that I have purchased over the years that I am not doing anything with at the moment.

I wanted minimal amount of work to make a good use out of these domains.

So I built Newsy. It turns your idle domain into a news aggregator.

I’m nearly there. You can sign up and I’ll invite you to check it out!


I like this -- all the best. I have access to close to 50 idle domains, so I'll see how I can put them to use. Is there a way to customize the news so that it is relevant to each domain?


Yep! That's the idea! :) Via RSS feeds + keywords.


If by any chance you would like help testing it more thoroughly, I have so many parked domains that I would like to put on it ASAP. Is there any other way I can contact you? Or you can reach me at nash at hoopsup dot com.


Interesting idea, though I didn't see any examples of what that looks like on your website, or links to live sites doing this.

Hard to think about it just in my mind with no visuals.


Here's an example for one of my idle domains.

https://www.heystartup.com/#/


Nice try with the: "You are currently {random} in the queue."


Did it work? :)


I'm trying to bring light fields to the masses as the next level of VR immersive experiences. I'm building a cheap light-field video camera and the software to process the footage automatically and reproduce the videos with a VR headset. BTW, it's not just like a normal VR video, because you have parallax.


Wow, I just looked up light-fields. That's a super cool idea. It would be incredible if it were possible to make a 3d light-field camera, so that the person watching it in VR could look all around them.


How do you deal with occlusion? I had the impression that’s the main issue with VR video, unless your recording rig has a massive horizontal offset to capture the sides.


You record the video from a matrix of cameras, each at a slightly different point of view, then use a light-field algorithm to generate novel images. As you said, you only solve occlusion if your cameras are separated enough, but in my case I only expect/want the user to move their head around a little. We can add more cameras if this tech starts to get adopted.
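
(To make "generate novel images" concrete: the crudest possible version just blends the nearest cameras in the grid, weighted by where the virtual eye sits. A toy sketch of that weighting -- real light-field rendering also reprojects rays per pixel:)

    # Toy novel-view synthesis from a camera grid: bilinearly blend the four
    # cameras nearest to the virtual eye position. Real light-field rendering
    # also reprojects rays per-pixel; this only illustrates the weighting.
    import numpy as np

    def novel_view(images, u, v):
        """images: (rows, cols, H, W, 3) array of camera images laid out on a
        grid; (u, v) is the virtual eye position in grid coordinates."""
        rows, cols = images.shape[:2]
        i0 = int(np.clip(u, 0, rows - 2))
        j0 = int(np.clip(v, 0, cols - 2))
        du, dv = u - i0, v - j0
        return ((1 - du) * (1 - dv) * images[i0, j0]
                + (1 - du) * dv * images[i0, j0 + 1]
                + du * (1 - dv) * images[i0 + 1, j0]
                + du * dv * images[i0 + 1, j0 + 1])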


I am creating a couple of open source tools for data governance. The first one is a data catalog (1) with tags for PII data. The second is a data lineage application (2). The goal is to keep these as simple as possible to install and use.

IMO the current options are too complicated or expensive, and only appropriate for the largest companies. I couldn't quickly hack together a simple application for data discovery or usage statistics. So I am building a dead simple data catalog that I can reuse. The data lineage app is the first app built on it.

(1) https://github.com/tokern/piicatcher (2) https://github.com/tokern/lineage


Why I can’t get through a day without anxiety. Many years of research, consulting with experts, running experiments and correlating data. It’s a hard problem.


Have you tried a non-verbal approach? I.e. 'authentic movement', guided dreamlike meditation, ecstatic dance? In a sense reversing Moravec's Paradox (1) and letting the older, bigger, more complex brain take care of things.

1. https://en.wikipedia.org/wiki/Moravec%27s_paradox


IDE for music composition http://ngrid.io

Launching soon.


I'm guessing this is different from a DAW? Also, cloud-based music-making applications are an area I'm generally interested in, and I find it somewhat crazy that a lot of mainstream music software companies haven't stuck a foot in. I'm aware of some, like Cubase having cloud collaboration, but it's mostly blue ocean.


I'm the lead author of Ardour, a cross-platform open source DAW. Just last night, I was helping out a user who was having issues (eventually traced to their AMD graphics stack). Their session wasn't particularly large - about 1 hour of recorded spoken word and some backing music. The whole data set came to 6.5GB ... non-trivial for "cloud-based" anything, even today.

Yes, there are ways to be clever about this stuff, but for "real music making" the typical size of the data involved makes cloud-based collaboration less immediately appealing than you might imagine.


Agreed, data volumes are definitely the biggest hurdle in this scenario. Any of the cloud DAWs I've seen mostly offer basic features, and a handful of recording tracks at best.

Just checked out Ardour, looks great! Being able to work with videos and (I presume) work on additional audio and eventually mix the two audio sources back into the video is a fantastic feature.


What's your tech stack?


What part are you interested in?


My god yes.


An acoustic system for poor-visibility tunnel evacuation assistance using psychoacoustic effects. Massively distributed, self calibrating, microsecond scale synchronized system with a bunch of interesting problems in software, electronics, acoustics and mechanical engineering.


Could you go into detail? Or explain what that means? It sounds interesting, but none of the words seem connected.


We use sound effects to evacuate people from fires in tunnels when they can't see any direction in the smoke. Which involves detecting the fire in the first place, and establishing binaural effects that are directional and suggestive of the direction you should take.

Explaining the implementation would take a wall of text, just listed the aspects that make it interesting.


I'm working on mobile apps to help researchers study dolphin acoustic communication, such as DC Dolphin Communicator 2019, which is free and open source on GitLab: https://play.google.com/store/apps/details?id=sm.app.dc&hl=e...

During a sabbatical trip in the Canadian Arctic in 1975, I came into close proximity with a beluga in Hudson Bay and was impressed by its unusual vocalizations, which were in-air and about 3 feet from me. The beluga was tragically killed by Inuit a few minutes later. That's how my interest started. I later learned the basics from two people who were leaders in this field.


I'm working on a personal project that tries to solve traffic congestion problems using live camera feeds. Feel free to connect with me on LinkedIn if you are interested. https://bit.ly/2RE3omt


I'm building a product that aims to get people hosting software again. The internet used to be bi-directional, in that people could host content as easily as they could consume it. We're currently a Kubernetes hosting platform, but I'm working hard on a system that will allow developers to extend our cloud with their own machines at home, in a data-center, or anywhere! From learning to host Minecraft servers at home, to Fortune 10 Software delivery, we don't see a reason why you should have to jump to different vendors and different platforms. Democratize server-side software! https://kubesail.com (YC S19)


I am trying to build a flood forecasting system for India using satellite remote sensing, hydrologic models, machine learning, crowdsourcing etc.


Nice


Working on https://LightSheets.app, a spreadsheet application allowing high performance data science type tasks like cleaning big chunks of data. I'm also doing a lot of exploratory coding around how far the spreadsheet concept can be pushed to "augment human intelligence", which has led me to read a lot of papers about this area. One thing I'm very interested in is how we can allow machine intelligence to "take the initiative" during the course of human work.

Hopefully in 2020 this means more than simply resurrecting a clone of MS Office's Clippy...


Aggregating virus spread data to visualize for Wikipedia. Hacky but working. https://github.com/globalcitizen/2019-wuhan-coronavirus-data...

Increasing food safety and security plus availability and choice in urban environments using robotics to automate food preparation and software to manage operations and logistics. Hopefully also make money. Differences from web stuff: includes embedded, mobile, electronics design, mechanical design, fabrication, business, cross-border operations, food safety regulations, etc.


Still thinking pretty loosely about the trust space, but a few conclusions on my end: LinkedIn is frustrating because people connect even though they don't know each other very well. I spend a lot of time meeting up with strangers (Craigslist, meetups, dating) and generally hoping that the world is good (though it pretty much always turns out to be). Phone number seems to be something that people only exchange when they have a fairly meaningful interaction - could that be used as a way to show you know someone/vouch for them? I feel like there's something in that space that would resonate with a lot of people.


Hmm, similar thoughts but less about trust and more about the psychology of personal networks.

I'm interested in what value we place on connecting people in our network. Say you meet two people in your travels who might benefit from being connected. What value do you place on making this connection? How does it make you feel? Does it have to have an obvious personal utility?

These are the types of questions I would love to investigate further. Maybe there are kinder, calmer tools that we can build for this specific purpose.


I’ve been prototyping in this space as well. Let’s all chat. Ping me at my HN username at doerhub.com


Trying to make it easier to prove your identity online. Essentially by creating an ID for use on the web.

When signing up for services that require real identities (banking, insurance, etc.) the standard currently is to require a picture of a passport, a video of yourself, or copies of some paperwork. These methods are all high-friction and provide dubious security and privacy. This is already a solved problem in some countries and I'm working on the equivalent on a larger scale, without the geographical restrictions.

If there is anybody else here working in this space then feel free to reach out!


With online fingerprinting, big companies seem to have the problems of "identity" and bot/spam prevention decently solved, but at the cost of users' privacy.

There are many opportunities that could really use a good mechanism to uniquely identify users and know for sure they are real people. This is pretty hard to do (or outright impossible) without the collaboration of governments. But governments will f*ck it up in a number of ways if we leave it to them. We have to think bigger than what some countries have already solved.

First, we need something like credit cards: a physical object (identity cards could work in many countries, but they tend to suffer from beyond-horrible usability when it comes to their digital chips / functionality) with a password that can be changed. We would also need a place to see all the "transactions" or actions done with your identity, as we have with credit cards. India's Aadhaar project has shown that biometric identification is not good. But it sounds very nice and sci-fi, so it sadly sticks. Nothing new yet.

But what we really need are manageable permissions, so you can always prove that you are an actual human while not necessarily revealing anything else and remaining a completely anonymous user, or choosing to reveal some data (country, real name, etc.).

If something like this were global and effective, not only would we have many more opportunities through the internet, we could also come much closer to things like direct democracy. Password management and online identity management would also become much easier. Obviously there are also many problems, starting with access to the internet itself. And this whole identification system does sound very dangerous from a privacy-minded perspective (but the alternatives will end up being much worse, and worse systems will be imposed on us). And I'm still completely ignoring the political will needed to do something like this.


I think you really hit the nail on the head, thanks for sharing!

Privacy is definitely at the center of what I'm building. My approach is to put full control of their own data in the users' hands. Data is only shared with a service when a user explicitly allows it, and the user is always aware of what that data is. Your idea of being able to share nothing at all is also something I've been thinking a lot about. Being able to verify an identity without leaking any PII is one of my main goals.

I also agree with you on the issue of governments. There are very few who have managed to introduce a digital ID locally. A large number of countries coming together and building a common solution seems very far-fetched currently. Where I'm from, Sweden, we have a well-functioning digital ID used by everybody. The funny thing is that it was created by the private banks and only later adopted by the government.

It's definitely a hard problem but judging by the evidence it's solvable, just hope I'm on the right path. If you have any more thoughts I'd love to hear them!


This is an almost* solved problem, depending on how low-friction a solution you consider solved. Just use X.509 PKI.

Have your real-world identity issuing agency (whatever that is in a specific country) become a CA, issue personal certs and store them in crypto-hardware. A smart-card interface is rarely present (it generally requires specific hardware); USB requires drivers.

Heck, the key can even live inside your SIM card if you prove your identity to mobile provider, which you have to if it is a contract anyway. Works well and is easy to use. Can be an attack vector for scams, though. https://en.wikipedia.org/wiki/Mobile_signature
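
(The core interaction is just a challenge-response against the personal cert's key. A minimal sketch with the python cryptography package, with Ed25519 standing in for whatever key the hardware actually holds:)

    # Minimal challenge-response identity proof: the service sends a nonce,
    # the user's hardware-held private key signs it, and the service verifies
    # against the certificate's public key. Ed25519 stands in for the card's
    # actual key type here.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    user_key = Ed25519PrivateKey.generate()   # lives in the smart card / SIM
    public_key = user_key.public_key()        # published in the X.509 cert

    challenge = os.urandom(32)                # service-generated nonce
    signature = user_key.sign(challenge)      # happens on the user's device

    try:
        public_key.verify(signature, challenge)
        print("identity proven")
    except InvalidSignature:
        print("rejected")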


From a technical standpoint I agree that it's definitely solved! I'm building my service on top of PKI for this reason.

The problem lies with the issuing and trust. Many countries do not issue e-IDs and it's not easy for a business to support all the ones that do. Take for example my country, Sweden. Basically every single citizen has an e-ID and it's used for everything. Yet I can't use it if I want to sign up for a non-Swedish service such as Revolut, N26, Transferwise etc.


Would you mind sharing some of the implementations of how this is solved already, i.e. protocols or related specifications which dictate how this is achieved?

This is a great problem and I'd love to know more.


Hi! Sorry about the late reply, your comment was marked as dead before.

I'm from Sweden where basically every single person has an app on their phone which functions as a digital ID, used by the government and a ton of companies. Every time we sign in to (or sign up for) services that handle taxes, banking, insurance, benefits, payments, utilities, etc. we use our mobile ID. There are a few other countries that I know have similar solutions, for example Finland and Estonia but many still don't. You can read more about it here: https://en.wikipedia.org/wiki/Mobile_signature

The big problem is that these solutions are all different and it's close to impossible to integrate them all. This is the reason why many global startups that require identity verification fall back to less convenient and secure methods, e.g. Revolut, Monzo and N26, to name a few. I believe that for truly convenient identity online there needs to be a single digital ID which works for everyone. This is what I'm currently working to achieve.

If you have any thoughts I'd love to discuss it further!


2 projects at the moment:

- a graph-based task manager that incorporates dependencies between tasks and infinitely-nested subtasks -- i.e. it maps to how we actually think about tasks being related and broken down. Aiming to get this one shared with the world in early Feb.

- a visual programming environment that represents how we model software in our heads, not how it runs on the computer/s. This is my longer-term, much more experimental project.

Drop me an email (in my bio) if you're interested in either! I'll be commercialising the former quite soon and I'm putting a lot of effort into making it pleasant to use.


Interesting to me:

https://github.com/ikorb/gcvideo

GCVideo has a way to convert the digital signal on the N64 into composite video, and has VHDL to create an HDMI signal with audio. So I have been working on finding the digital audio out on the N64 and converting the whole signal to HDMI.

In not so many words, I am recreating this from scratch: https://www.retrorgb.com/ultrahdmi.html

Mainly because the UltraHDMI is impossible to find.


I'm building a service that fetches the audit logs from all your SaaS tools (think GSuite, Okta, Dropbox, Zendesk, Salesforce, Github, etc) and pushes them into whatever logging pipeline you use.

I built a similar tool internally at my last company and we used it to alert on things like employees making google drive files public to the internet, okta configuration changes, github ssh deploy keys getting added, employees logging in from foreign countries, etc.

If anyone wants to check it out you can reach me at arkadiy{at}logsnitch.com (or just sign up at the same domain).


I'm trying to learn how one writes a rules engine for a digital card game. That is, the system for defining valid moves and combinations and such that isn't just a crap ton of bespoke code.


I'm trying to get people to sleep better by providing them relevant sleep coaching, combining their sleep tracker’s data with a CBT-I derived sleep coaching program. I’ve been working on this project for a year now, and it’s finally starting to take the shape I wanted it to have.

It’s been a really tough journey. I was the only coder and designer on the project for the longest time, and my development skills weren’t really that good when I started building this.

Here’s a link to it: https://nyxo.app


Nice start! Are there any fitness trackers you believe outperform the rest in terms of sleep tracking? It's an interesting idea for an app, because I have personally found specialists are booked 3+ months in advance.


Oura seems to be the most accurate at the moment, but only by a small margin, and that is mostly because it's the only one that also uses temperature to measure sleep (in addition to heart rate and movement).

Detecting when a person is asleep has become quite good, but there's still a lot of work to do in detecting sleep stages. I would not trust any wearable's deep-sleep readings too much.


I'm trying to figure out the appropriate discount rate and methodology that governments should use when doing cost-benefit analysis of big expenditure projects (e.g. infrastructure). It would seem that economists have been arguing about this for many decades now with no end in sight.

There are some interesting value-judgements that have to be made here (e.g. do we value the consumption of future generations more/less than present consumption?), so I suspect there will never be an objective answer to this question.
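
(To see why the choice matters so much, compare the present value of a fixed benefit 50 years out under discount rates economists actually argue about -- a quick sketch of standard exponential discounting:)

    # How much is $1bn of benefit in 50 years worth today? Exponential
    # discounting PV = FV / (1 + r)^t makes the answer wildly rate-sensitive.
    benefit, years = 1_000_000_000, 50
    for r in (0.01, 0.03, 0.07):
        pv = benefit / (1 + r) ** years
        print(f"r = {r:.0%}: PV = ${pv / 1e6:,.0f}M")
    # r = 1%: PV = ~$608M ; r = 3%: PV = ~$228M ; r = 7%: PV = ~$34M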


Interesting. For my data mining class I took NYC Open Data -- merging temporal crime data and capital expenditure data by geographic district/precinct -- to see if there was evidence that spending money in a region had a positive impact on crime reduction, and then whether I could use that model to predict how much crime would be reduced given a specified funding injection into the region. We assumed there was some lag (e.g. 6 mos. or 1 year) between funded project completion and community impact (reduced crime).

The model was only a little over 70% accurate, but the data was sloppy because we had to get creative mapping precincts to districts, since their boundaries don't overlap exactly. However, I think this could be significantly improved, since crime data is geo-tagged, so you can get much more specific about where crimes occur with respect to funded projects.

I think that model can be made to help inform where funding could be most beneficial from a social perspective. I realize there are other factors in infrastructure investments than crime -- I was focused on community improvement projects (e.g. blight removal, green space development, school construction/repair, etc.)


What do infrastructure groups underwrite to? That should be a decent starting point. There are advisory groups that do only infrastructure projects for public-private partnerships. Might be a helpful start.


I'm reworking the (already pretty old) concept of literate programming, basing my research on funnelweb, the implementation originally written by Dr Ross Williams. Given the hardware of modern times, all the ugly hacks and compromises of the C implementation (including ugly delimiters, unnecessarily terse syntax, and relying on byte values while parsing text instead of a recursive descent parser) can go; a new take on the subject might bring it into today's toolchain.


Do you have anything available for public consumption? I'm a fan of funnelweb and would love to see it updated.


Since you asked, and since people who can use these tools are few and far between... I have made my own take on funnelweb syntax. I would like your opinion. Unfortunately it's not really documented yet, so please look at the sample (complete) input file.

It's currently in a working state. https://raw.githubusercontent.com/loa-in-/python3-dreamwork/...

You can see current output here: https://github.com/loa-in-/python3-dreamwork/tree/testout/te...


My current longshot is the development of a dropship fulfillment solution as a service -- kind of a managed dropship network for every single dropship vendor and every single dropship retailer out there. The solution should basically take the pain of daily logistics management off the retailers' backs, kind of like what Amazon does for their own dropshippers. As a retailer you get full tracking and cost transparency; as a dropship vendor you get a platform that directly integrates with your ERP system, gives you shipping labels and so forth...

Funny thing is, the basic software more or less exists already, at least on the fulfillment and logistics side of things. The tricky thing now is to create the physical network (though companies like DHL ship for everyone, even next day) and come up with the processes to match n retailers with m dropshippers (some of them shared between retailers) and a basically infinite number of consumers.

I said longshot. The first step is to get my 4PL company off the ground. A 4PL is a nice first step: I take care of daily logistics operations for clients, which pretty early on also includes a dropship component. So once the 4PL is earning some money, the next step will be to define processes for a scalable dropship solution, identify software gaps and then create the platform. Talking about longshots...

How I got the idea? I worked for Amazon running, among other things, the Amazon.de dropship network. After that I worked for a producer of solar modules. That company sold some of its modules through a webshop and had some dropshipping. Totally inefficient, opaque and expensive. So I told myself that it can be done better. Took me three years to take the leap into the startup world.


Amazon.de has a dropship network?


Yeah, it's called Direct Fulfillment. Roughly 400 vendors a couple of years ago for Amazon.de alone; should be more by now. Quite interesting, though. Not sure why Amazon never really pushed that instead of its own fulfillment centers -- it would have reduced fixed costs by quite a bit IMHO.


Sustainability in fashion.

There's a lot of "greenwashing" in the industry, driven by opaqueness and a lack of measurable data.

Step #1 is to get more brands on board. Step #2 is to make it easier to monitor the supply chain, with actionable and measurable KPIs built around data.


This is an interesting challenge, because reducing consumption is the most effective way to increase sustainability. This is antithetical to selling more clothing though, so you won't find much traction with fashion brands.

That being said, Poshmark does have a $600m valuation, so there is a market here.


Patagonia's also done a great job with this. I have no idea if all their advertising around not buying things you don't need has increased the number of items they've sold, or just the price they can charge for them.


This is very interesting! Any way for us to connect directly to learn more? I have deep expertise in the fashion manufacturing industry.


I'd love to! Shoot me an email, paul@mannr.co


I am trying to create a metasearch engine for apparel in India. Apparel shopping is different from commodity shopping and requires lots of browsing before selecting the final product to buy. A user will know about a limited number of vendors and will search for products only on them, so a central web/app is needed to surface results from the long tail of vendors. This product can be extended to the furniture and lifestyle domains too.


By vendors, you mean brands right?


Brands, Niche e-commerce websites, boutiques


As somebody with broad interests, I've long been fascinated by what it means to be a "generalist" and understanding when a wide, varied skillset is an advantage over a hyper-narrow one.

I've been reading about this for years and recently started sending out short summaries of what I've learned (typically geared at how the lessons can be applied practically).

Last week I shared how Nobel laureates are 22 times more likely to have a side hobby as a performer than their peers.

Ultimately, I am trying to land on a succinct answer to "how do you channel broad interests and talents into an impactful career?"

(this is my email: https://stewfortier.com/subscribe)


Whoa, I just signed up for your newsletter a couple weeks ago. Small world. I'm enjoying it so far, keep it up!

I too have really broad interests...I find basically everything interesting, which is both a blessing and a curse, as I'm sure you've experienced. I'd love to talk about this more...do you feel you've come to any kind of answer on how to focus your wide interests?


Excited to have you on the list! Definitely hit reply every once in a while as I'd love to chat more about all of these topics one-on-one.

One theme I'm starting to converge on is the idea of a generalist as an "expert" at a) maintaining a wide range of mental models and skillsets and b) developing a sense of which type of problems to apply each to.

In other words, effective generalists become good at knowing which speciality or approach should be applied to a problem, even if they "only" grasp the basics of any one discipline.

Example:

A software engineer wants to develop deeper friendships. They may think that building an app that reminds them to keep in touch with friends will help.

Of course they think that... software is what they know best.

But a generalist may take a different angle and see that the root cause isn't an automation / information problem, it may be a human psychology issue.

"The real problem is that you don't believe you're worthy of love. If you work on that, you may feel confident enough to want to reach out more."

The next email is going to start outlining the most practical, effective mini-mental models that generalists can use to solve practical problems.


I'll definitely start hitting reply, thank you!

That's a really interesting, but sensible, conclusion to come to. It seems to follow that generalists would make great business/personal coaches, as they're good at pointing people in the right direction. I'd be curious to look at great coaches and see if they had a ton of different interests.

I'm stoked for the next email :)


Assume you have Range by David Epstein? Basically the book advocates diversity in skills over specialisation. Personally I am a big buyer of that thesis -- one reason I read HN every day.


Yes! I'm halfway through and have been stunned at some of the less-known research he cites and some of the popular research he debunks (specifically, the study behind the "10,000-hour rule").

It's also somewhat of a relief to read.

I think intuitively many people feel that range matters, but fear that we'll sound like we lack a "speciality" or even "hard skills" if we proclaim ourselves generalists or broadly curious.


1. Trying to make hiring in tech a better experience by sharing my knowledge and experience with both job seekers and those doing the hiring. The really hard part about this is influencing some change in how hiring is done, because I strongly believe the current hiring process selects for the wrong skillset. I'm publicly committing to write about this topic weekly with a newsletter that I just launched: https://hiringfor.tech

2. At work, I recently completed a really long project with a large team. I'm trying to make the lessons learned accessible to others in the company because they'll also be undertaking similar projects soon. That means documenting my learnings at a level of abstraction that allows others to not make the same mistakes as us, but still have enough flexibility to tailor their implementation based on their team's needs. The hard part is the intersection of technical and people-oriented knowledge dissemination.

This year is going to be focused on a lot of teaching, which I'm excited about.


Is it possible for you to add an Atom feed for #1? I try to reserve email for real communication and use feeds for reading periodicals. I would subscribe to it as a feed, but it doesn't belong in my email.


Absolutely! The feed will be available at https://hiringfor.tech/feed.xml when I start publishing content (probably next week).


I am working on a site that helps people in the teaching profession track fees. Oftentimes, small-time tutors are left using multiple tools like Google Docs, Calendar, etc. to track contacts and fees. This is an attempt to provide one stop to manage all of it.

https://tracfee.com


I'm working on improving the understanding of systems through research, writing and presenting. I write this regular set of articles about unintended consequences coming from tech, politics, and business: https://unintendedconsequenc.es/


I work on things like the datasets you can find here:

https://datasetsearch.research.google.com/search?query=whole...

Teaching machines to diagnose cancer with superhuman sensitivity and specificity makes it easy to sleep at night.


I am working on a solution for people to defeat procrastination. Here's how it works: you select a time slot for work, and we assign you an accountability assistant who will get on a call with you and keep in touch as often as necessary to keep you from procrastinating by holding you accountable for the task at hand.


Focusmate does that. It only worked for me for about 3 sessions, until the novelty was gone.


The recent (BBC?) article about how procrastination is caused by a lack of emotional management, not time management, made a lot of sense to me.


Sounds expensive


Let's think about it and come back to it later.


I am working on a blogging platform that does not need any backend (in terms of an app listening for HTTP connections). The overall architecture is a web app that talks to WebDAV, and then pages get built by a static site generator. I use getpelican.com, but you can use Hugo or Jekyll, depending on your preference.


How would that work? Surely something needs to serve the static content in response to an HTTP request?


You are right, of course. There is still nginx taking care of everything. The point is that you need nothing but nginx/Caddy/Apache.

Currently I don't know of any way to trigger script execution through the HTTP server, so there is a systemd timer checking for changed files and recompiling the whole site. This has a lot of downsides. The easiest approach would be for the static generator to react to the existence of a specific file: recompile the site, then remove the file afterwards.
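
(Something like this sketch is what I mean -- a tiny poller that treats a marker file as the rebuild trigger; "pelican content" and the paths stand in for whatever generator and layout you use:)

    # Sketch: poll for a marker file uploaded via WebDAV; when it appears,
    # rebuild the site and remove the marker. Command and paths are examples.
    import os
    import subprocess
    import time

    MARKER = "/srv/webdav/.rebuild"

    while True:
        if os.path.exists(MARKER):
            subprocess.run(["pelican", "content"], cwd="/srv/site", check=True)
            os.remove(MARKER)
        time.sleep(5)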


I'm building a new general purpose RPC mechanism to replace the current HTTP/REST technology, as well as the whole TCP port thing. What service you're talking to on the host will be completely hidden from prying eyes, and unblockable.

You call an endpoint anywhere on the planet and give the name of the service you want, which then gives you access to that service's published API (similar to how you'd use import and gain access to a library's API).

To start, it will operate over port 80/443 to allow seamless integration into the current world infrastructure, but I'm also hoping that in maybe 10 years it could replace HTTPS entirely, possibly even TCP.

The first step is an encoding mechanism that supports the most common data types natively, which I've defined here [1] and am currently writing implementations for in Go. It's a parallel text and binary encoding, so that we don't waste so much time generating bloated text that's just going to be machine-parsed at the other end, but can still convert on demand to a text format that humans can read/write. I ended up developing new encoding schemes for floating point values [2] and dates [3] to use in the binary format.

The next layer above that is a generic streaming protocol [4], which can operate on top of anything from i2c to full-on HTTP(S), and supports encryption. It's designed to be as non-chatty as possible so that for many applications, you simply open the connection and start talking without even waiting for the other side's acknowledgement. It supports bi-directional asynchronous message sending with chunking and optional acknowledgement on a per-message basis, with tuneable, negotiable message header size.

The final layer will be the RPC implementation itself. I want this as a thin layer on top of streamux because many of the projects I have in mind don't need full-on RPC. This part is still only in my head, but if I've designed the lower layers correctly, it should be pretty thin.

[1] https://github.com/kstenerud/concise-encoding

[2] https://github.com/kstenerud/compact-float

[3] https://github.com/kstenerud/compact-time

[4] https://github.com/kstenerud/streamux
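
(If you want a feel for what a multiplexed message stream boils down to: tagged, length-prefixed chunks. A toy Python sketch of that idea -- not streamux's actual wire format, which is specified in [4]:)

    # Toy message framing: each chunk is (message id, "last chunk" flag,
    # length) followed by payload bytes. Real streamux negotiates header
    # sizes and much more; this only illustrates interleaving chunked
    # messages on one byte stream.
    import struct

    HEADER = struct.Struct("!IBH")   # message id, flags, chunk length

    def encode_chunk(msg_id, payload, last=True):
        return HEADER.pack(msg_id, 1 if last else 0, len(payload)) + payload

    def decode_stream(data):
        messages = {}
        while data:
            msg_id, last, length = HEADER.unpack_from(data)
            chunk = data[HEADER.size:HEADER.size + length]
            data = data[HEADER.size + length:]
            messages.setdefault(msg_id, bytearray()).extend(chunk)
            if last:
                yield msg_id, bytes(messages.pop(msg_id))

    stream = (encode_chunk(1, b"hel", last=False) + encode_chunk(2, b"world")
              + encode_chunk(1, b"lo"))
    print(list(decode_stream(stream)))   # [(2, b'world'), (1, b'hello')]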


Symbolic AI with common sense and explanation built on top of NNs. Here's a talk I gave last year, though I'm no longer with that company and am working on the tech elsewhere.

https://youtu.be/thmkaYOayCM


I'll try and watch your talk by the weekend -- I'm interested in this too! :-)


Hey! I saw your talk at EE380, mind getting in touch over email?


Sure, send me a note.


I am working on software that makes building web apps faster, easier and more secure.

You host a copy of my web application, and it handles all your user account stuff with modules that add organizations, Stripe Subscriptions and marketplaces powered by Stripe Connect. You write your application with its own web server in whatever language and the two servers form one site.

At the moment I am trying to finish automating my documentation based on the test suites, including API details from API tests and screenshots from UI tests.

I am looking for testers if you are building a SaaS or a Connect marketplace.

https://github.com/userdashboard/dashboard

https://userdashboard.github.io


I’m interested in this type of idea and would be willing to help test your flows and give feedback. Email me at my username @ gmail!


Our current computer GUIs are not conducive to productivity. Daily things like too many tabs, distracting notifications, and multiple windows are “symptoms” of our computers not being able to understand context -- context meaning what we actually want to accomplish.

Whenever we begin to do something, our computer just sees a bunch of apps and windows. It never tells us how to get better or does things on our behalf. At Amna, we’re working on a natural interface structured around the way people think. We believe it will change the way you interact with computers, and the way computers learn from us.

full problem: https://getamna.com/blog/post/amna-solves-problems/


Our software development company is working on an ongoing issue. In our experience, we find that virtual reality and augmented reality advancements are not happening within our state, New Jersey. Late last year, we set up the Virtual Reality Roadshow. It was our goal to help general consumers and small businesses become more familiar with the benefits of virtual reality technology. We shared our experience and our thoughts on VR in 2020 in a recent post: https://www.invonto.com/insights/virtual-reality-trends-2020...

In 2020, we plan to continue the VR Roadshow and brainstorm new ideas to bring more awareness to virtual reality tech.


Polyphase channelizers. I got interested in software defined radios, which led me to getting a HackRF One, and that led to learning how to build SDRs, and that led to joining a company that was building cutting-edge SDRs for really diverse tasks, and that led me to diving into DSP and the mathematics of radios, which led me into modern protocols and modulation schemes, which led me to various people doing experimental RF work. I started noodling on what an SPU (Spectrum Processing Unit, analogous to a GPU) might look like, and that led me to Dr. Fred Harris' work on polyphase filters and channelizers, and now I'm internalizing all of that so that I can build a device that processes spectrum in new and novel ways.
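
(For anyone who wants the core idea in code: partition the prototype low-pass filter and the input into M phases, filter each branch at the low rate, then take a DFT across the branches. A toy numpy sketch, with the commutator and phase-sign conventions glossed over:)

    # Toy critically-sampled polyphase channelizer: split a wideband stream
    # into M equally spaced channels. The prototype low-pass filter is split
    # into M polyphase branches, the input is decimated into M phases, each
    # branch is filtered at the low output rate, and a DFT across branches
    # separates the channels. (Commutator/phase conventions glossed over.)
    import numpy as np

    M = 8                                     # number of channels
    n = np.arange(-64, 64)
    h = np.sinc(n / M) * np.hamming(len(n))   # windowed-sinc prototype low-pass
    h /= h.sum()

    def channelize(x, h, M):
        x = x[: len(x) // M * M]
        xp = x.reshape(-1, M).T               # phase m holds x[kM + m]
        hp = h.reshape(-1, M).T               # polyphase partition of the filter
        y = np.stack([np.convolve(xp[m], hp[m]) for m in range(M)])
        return np.fft.fft(y, axis=0)          # rows = channel outputs

    # A complex tone centered in channel 2 comes out strongest on row 2:
    t = np.arange(4096)
    x = np.exp(2j * np.pi * (2 / M) * t)
    chans = channelize(x, h, M)
    print(np.argmax(np.mean(np.abs(chans) ** 2, axis=1)))   # -> 2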


Prototype for a new kind of Stirling engine; it differs from existing ones the way a two-stroke gas engine differs from a four-stroke. It would make solar thermal efficient even for low temperature differences (sub 100°C), or allow for storage of energy by using liquid nitrogen tanks on the low-temperature side. I'm currently acquiring a better home with a garage to develop this idea.

Professionally: changing IoT into one big robot, making a platform to connect ALL devices with one system -- essentially what Bruce Schneier warns us about [0].

[0] https://www.schneier.com/blog/archives/2016/02/the_internet_...


I'm trying to create a program that can procedurally generate regular plane tilings, in a way that allows them to blend into each other over space and/or time. It makes sense in my head, but I think it won't end up working quite as well as I hope.



Thanks! I've seen WFC before and I think it's brilliant. I'm trying to do something with a little less randomness to it. I want to animate transitions between Archimedean tilings (https://en.wikipedia.org/wiki/Euclidean_tilings_by_convex_re...). There may be a connection with WFC that I'm missing. My approach is more, uh, direct, I guess.


I started my job search recently and realized it's a trade-off between earning a good income and being terribly bored by a company's mission. I can't believe that in 2020 it's hard to find something awesome and exciting to work on whilst being financially secure. Obviously it's not only about the company's product but also values, culture & people, but I realized the major driving force for my career decisions was always intellectual curiosity.

Hence I started a website to curate cool & impactful projects that people are building that nobody knows about because they are small or unknown (yet) -- so kind of a discover-amazing-companies / make-an-impact kind of thing. Hoping to launch this month.


I am hoping to get people back into hobbies that they have lost interest in or have fallen out of the loop on.

I have built a VERY basic landing page, but I am struggling to find time to spend on it.

https://losthobbies.com


Pivoting my parents' homegrown ERP business from the traditional software sales model (one-time sale with annual support contracts) to a SaaS model with MRR to grow and scale the company. This also comes with changing the organization’s mindset and tools used.

I have to say that the technical challenges of bringing in modern web technologies to interface with legacy systems have been an interesting (and frustrating at times!) experience. After working as a software dev for a number of years before taking this on, I’ve been jumping between sales, marketing, devops, management, and actual software development all in a day.


That sounds really cool -- I'd be interested to hear more about some of the hurdles you've encountered integrating modern web tech and old enterprise software. Any particularly notable challenges you'd be willing to share?


It comes down to working on a project that’s been continuously changing over the span of three decades. Over the years my parents have customized their offerings to account for numerous clients (our focus is on wholesale distributors). Keeping track of inventory gets especially complicated when you start dealing with variants such as colors, sizes, perishables, and pre-packaged goods (our customers sell in pallets or individually).

Because of this, the systems end up getting fragmented over time to handle all these different cases and specific customer needs. Before I joined, the developers had attempted to unify as much as they could, but the business need wasn’t quite there to justify it as much as it should have been.

Building out our initial SaaS offerings has helped a lot, since there’s the concept of only having one set of code running on an instance. Because of that, we’re able to abstract away all of the different nuances of each system such that the cloud servers don’t end up in a fragmented state as well, leaving the on-premise legacy system as it is for now. The plan is for the core ERP to eventually move in that direction, but so far we’re chipping away at all the edge add-ons and functions first, such as API integrations with Shopify, Amazon, etc., and building e-commerce storefronts for our customers in React/Node.js.


We're building an implantable device for blind patients that delivers electrical stimulation to the optic nerve via the retina.

The implant helps patients perceive visual information about their surroundings.

Pretty cool tech and fun to work on, too.


I'm building a Python package to get the latest news from the most popular news publishers without any external API use.

So, for example, the input is 'nytimes.com' and the output will be the latest headlines.

Plan to release it in a few weeks.
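
(A sketch of the input/output shape using a publisher's own RSS feed and only the standard library -- the feed URL is just an example, and the real package may resolve headlines differently:)

    # Sketch: fetch a publisher's RSS feed and print the latest headlines.
    # The feed mapping is an example; per-publisher resolution may differ.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEEDS = {"nytimes.com": "https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml"}

    def headlines(domain, limit=10):
        with urllib.request.urlopen(FEEDS[domain]) as resp:
            root = ET.parse(resp).getroot()
        return [item.findtext("title") for item in root.iter("item")][:limit]

    for title in headlines("nytimes.com"):
        print(title)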


Working on XAI for complex computer vision tasks. We’ve built a toolkit which provides the following:

1. Heatmaps based on all popular gradient based explainable AI techniques (plus our own) for classification, regression and semantic segmentation tasks.

2. Uncertainty modeling for classification, segmentation and regression tasks.

3. Concept Discovery/ Pattern Discovery (and dependence) for patterns learned within a deep neural network. (Loosely based on TCAV)

4. Using network internals for optimal pruning and model compression.

Send us an email at sales@untangle.ai if you’re interested in trying out our toolkit. We offer a 30-day free trial period.


Building a modern headless commerce platform with a focus on developers, making it fast and easy to get started.

It might not be as impressive as some of the other comments, but it does seem like something the market needs.


What does “headless commerce” mean?


Presumably it's the backend side of an ecommerce setup, where you provide an API around which others can implement a UI.


I'm working (at Chequeado.com) on automated fact checking using machine learning and NLP. Our first product works in Spanish (it's already been used in Chequeado's newsroom) and we're working towards a Portuguese version for fellow Brazilian fact-checkers.

I'm also working as a contractor on automated valuation systems for real estate properties, mainly for the Argentinian market. The company has already sold the service to a big international bank to periodically update their mortgage valuations.

And now I'm pondering about starting a research+prototype AI consultancy.


Helping biomedical science efficiently search treatment x biomarker space by replacing the horrifically inefficient clinical trial system with a globally coordinated adaptive Bayesian active learning system.
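
(Concretely, the "adaptive" part can be as simple as Thompson sampling over treatment arms, shifting allocation toward whatever is working as evidence accumulates. A toy Beta-Bernoulli sketch, nothing like a real trial design:)

    # Toy adaptive allocation via Thompson sampling: each treatment arm keeps
    # a Beta posterior over its response rate; each new patient is assigned
    # to the arm whose sampled rate is highest, so allocation shifts toward
    # what works. Numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    true_rates = [0.30, 0.45, 0.60]          # unknown to the "trial"
    wins = np.ones(3)                        # Beta(1, 1) priors
    losses = np.ones(3)

    for patient in range(1000):
        samples = rng.beta(wins, losses)     # one posterior draw per arm
        arm = int(np.argmax(samples))
        response = rng.random() < true_rates[arm]
        wins[arm] += response
        losses[arm] += 1 - response

    print(wins + losses - 2)                 # patients per arm; arm 2 dominates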


I'm working on an automation app, https://github.com/rmpr/atbswp, to make automation accessible to non-technical people. I used something like this called TinyTask back in the day on Windows (mostly to automatically play my Asphalt 8: Airborne races :p), but when I switched to Linux I noticed that nothing like it existed, so this is aimed at addressing that. Right now the most practical use I've seen is automating live demos during conferences, for example.
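
(The core record/replay loop is tiny. A sketch of the general idea using pynput -- not necessarily how atbswp does it:)

    # Sketch of record/replay: capture timestamped mouse clicks, then replay
    # them with the same relative timing. Illustration only; atbswp's actual
    # implementation may differ.
    import time
    from pynput import mouse

    events = []

    def on_click(x, y, button, pressed):
        if pressed:
            events.append((time.time(), x, y, button))
        if len(events) >= 5:          # stop recording after 5 clicks
            return False

    with mouse.Listener(on_click=on_click) as listener:
        listener.join()

    ctl = mouse.Controller()
    start = events[0][0]
    for t, x, y, button in events:
        time.sleep(max(0, t - start))
        start = t
        ctl.position = (x, y)
        ctl.click(button)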


Not as interesting or as world-changing as many of the other problems here, but it scratches that 'language itch' for me: I'm building an interop bridge between Elixir and Zig that makes calling Zig from Elixir safe, elegant, easy, and comprehensible:

https://github.com/ityonemo/zigler/

On my plate currently: Figure out how to make a long-running zig function run in an OS thread alongside the BEAM so that it doesn't have to be dirty-scheduled.


A few things, separately:

- how to do digital identity in health and public services for ~15m people

- replacing enterprise/waterfall security risk assessment with collaboration and iteration.

- applying product management methods in the public sector


Working on the computer vision & perception system that allows delivery robots to drive autonomously 99% of the time, with only occasional remote human assistance. Making sure robots need less and less assistance, all while having hundreds of robots in production doing commercial deliveries.

https://www.instagram.com/starshiprobots

The technology and business do work, so we will probably have thousands of robots within a year, and millions not long after that )


Parsing enumeration and chronology data for serials. E.g. "v.1" is obviously volume:1. But throw in years, parts, editions, copies, supplements, numbers, page ranges, etc., and shit gets weird.
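
(A taste of the easy end of the problem -- the real data defeats any single regex:)

    # The easy end: pull volume / number / year out of strings like "v.1",
    # "v.12 no.3", "v.4 (1998)". Real serials data gets much weirder.
    import re

    PATTERN = re.compile(
        r"(?:v\.\s*(?P<volume>\d+))?\s*"
        r"(?:no\.\s*(?P<number>\d+))?\s*"
        r"(?:\((?P<year>\d{4})\))?")

    def parse(enum):
        m = PATTERN.match(enum.strip())
        return {k: v for k, v in m.groupdict().items() if v}

    for s in ["v.1", "v.12 no.3", "v.4 (1998)", "no.7"]:
        print(s, "->", parse(s))
    # v.1 -> {'volume': '1'}
    # v.12 no.3 -> {'volume': '12', 'number': '3'}
    # v.4 (1998) -> {'volume': '4', 'year': '1998'}
    # no.7 -> {'number': '7'}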


Building a VR application that a domain professional could use to uncover insights from high-dimensional data. The goal is to prove that doing this in VR beats a 2D screen, or a 3D plot on a 2D screen.


I'm curious to hear what your approach is and how this turns out. I've heard offhand (at a university) that 3D UIs performed more poorly than 2D every time they were tried. I guess that's not necessarily the same as user interfaces, though. There may be some academic research on this worth looking up (though maybe not, because people don't often publish negative results).


Seeing if mental health crises can be predicted by gathering passive data from your phone (accelerometer, gyro, GPS, music choices, keyboard entries, app usage, sleep, facial expressions, etc.).


I think we're doing this already, aren't we? At least for ADHD and depression. I think it's a great use case, but the privacy risks are massive.


Yeah, the privacy issues are a bit scary. Our app can only be installed by people who are in IRB-approved studies, but the nature of the data collected means de-identifying it is impossible. There is also the issue of what happens if it turns out we can predict things -- it's a bit pre-cog-ish. Depression and suicide are such massive problems, though, that new methods are absolutely needed.


Wait how are we doing this for ADHD?


At my company, we are working on combining and solving the route-optimization, scheduling and transportation problems for electric vehicle drivers: https://www.makemydayapp.com Think of an EV driver: where and when should they charge the car? Why not charge the vehicle according to your schedule and go to your meeting while your car is charging? And, of course, pre-book your charging station.


Working on building a system that can be used in urban areas to help fight climate change and water treatment issues using algae. There has been so much research on the uses of algae and how effective it is at using CO2 to grow, and now I'm just trying to think of the most effective way to launch a venture.

The hardest part has been deciding what to fight first and meeting other people who have experience working with algae. I would love to connect with anyone that wants to talk algae!


Currently at YC's Startup School, trying to solve the unemployment and underemployment problem that autistic and Asperger's people experience. Currently validating assumptions, and it looks like a freelancing platform for autistic people is the most promising direction. I've started accepting applications from autistic people: https://www.spectroomz.com/work-from-home


I simulate water distribution networks.

I create computer models of water networks and calibrate them so utilities can do what-if and growth scenario planning (e.g. what happens if this pipe bursts? How would the network cope with 20k new houses over 40+ years?).

I'm also developing software to help water engineers build and run models, some of it open source and some of it commercial.

I'm currently pushing most of my effort into an open source JavaScript library to simulate water networks.
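
(The physical kernel under all of this is small. For example, head loss in a single pipe via the Hazen-Williams formula -- the solver's real job is balancing thousands of these plus nodal demands. A Python sketch for brevity, even though the library itself is JavaScript:)

    # Head loss in one pipe via the Hazen-Williams formula (SI units):
    # h_f = 10.67 * L * Q^1.852 / (C^1.852 * D^4.87). A network solver
    # balances these losses and nodal demands across thousands of pipes.
    def head_loss_m(L_m, Q_m3s, D_m, C=130):
        """L: length (m), Q: flow (m^3/s), D: diameter (m), C: roughness."""
        return 10.67 * L_m * Q_m3s**1.852 / (C**1.852 * D_m**4.87)

    # 500 m of 300 mm pipe carrying 80 L/s: about 2.1 m of head lost.
    print(round(head_loss_m(500, 0.080, 0.300), 2), "m of head lost")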


On a OSS sideproject, I'm working on a DI container for TypeScript that can autowire interfaces, Array of types and generics.

Since the type information is erased at compile time, it uses the compiler API to extract the data needed and generates TS code for the interface and constructor mappings.
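
A hedged sketch of what the generated wiring might look like (the binding map and resolve helper are illustrative, not this library's actual API):

    interface Logger { log(msg: string): void }

    class ConsoleLogger implements Logger {
      log(msg: string) { console.log(msg); }
    }

    class UserService {
      constructor(private logger: Logger) {}
      greet() { this.logger.log("hello"); }
    }

    // What the compiler pass could emit at build time, since the interface
    // types themselves are erased and can't be reflected on at runtime:
    const bindings = new Map<string, () => unknown>();
    bindings.set("Logger", () => new ConsoleLogger());
    bindings.set("UserService", () => new UserService(resolve<Logger>("Logger")));

    function resolve<T>(token: string): T {
      const factory = bindings.get(token);
      if (!factory) throw new Error(`no binding for ${token}`);
      return factory() as T;
    }

    resolve<UserService>("UserService").greet(); // prints "hello"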

The library is on GH, but there's not really much to show. I've posted it on /r/node and it got some positive reactions, but it didn't get that much attention.


My 'paid job' is boring - Identity Management for Blue Chip companies.

My fun stuff at the moment:

1. Learning Windows IoT on a Raspberry Pi 3B

2. Working on proof-of-concept Search Engine Indexers for specific datasets and/or local file-systems (on network servers).

3. Exploring a new paradigm of allowing people to easily publish train-of-thought type content without having to post a long series of tweets or silo it inside Facebook/LinkedIn/Gist etc.



Creating a directory of all WhatsApp-using businesses in the world. Dogfooding it, I must say it's pretty nice booking a dentist without calling and waiting on hold.

Business listings are fairly sparse in some countries. Many owners do not bother creating even a google maps profile and just rely on word-of-mouth for new clients. Acquiring the bottom of the data-iceberg will require some creativity going forward.


Doesn't WhatsApp provide a way for businesses to publish themselves in a public directory? I'm guessing not, given that you're looking at this, but why wouldn't they...


I'm researching ways to scale deliberation and qualitative decision-making with the number of participants. I think this is the basis for making politics work and tackling big, wicked problems like climate change. It's slow, since my time is limited. But talking with lots of interesting people about ideas and approaches is promising. If you're interested in talking, please contact me.


I'm extremely interested, but I don't see your contact info in your profile. My email's in mine, if you want to get in touch.


Ouch, thanks for telling me. I always thought having filled the email field was enough for others to see it. Fixed now.

Anyways, just sent you a mail.


I am working on exposing the file system to the web browser using OS-like GUI controls, and then selectively sharing parts of it with specified users under security controls. It's kind of like turning your computer into a private, cross-OS shared drive.

The problem this solves is sharing. Sharing between devices/users should be as simple as copy/paste initiated by either user, as if everything were local.


I've been learning how to design gameplay mechanics for the Ethereum EVM by building a game dapp with features I haven't seen before.


I am building a Scratch programming course for kids.

But it's more than just that. I want to take the material and make it more entertaining as well as educational.

I am observing that more and more kids are learning things on their own by just going online and searching for videos on how to do X. We are on the cusp of online learning overtaking traditional classroom learning in terms of quality and presentation.


I recently set aside a few days to implement my own SAT solver. The idea is to describe the solution space of each clause with a DNF, then intersect the solution spaces, (dnf1 and dnf2).to_dnf(), in a way such that the intermediate solution-space representations stay small. In the end it solves ALLSAT by converting CNF to DNF.
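
A minimal sketch of that intersection step in TypeScript, using a toy representation where a DNF is a list of partial assignments and contradictory merges get dropped:

    type Term = Map<number, boolean>; // variable -> value, a partial assignment

    // Merge two terms; null means they clash (x and not-x).
    function merge(a: Term, b: Term): Term | null {
      const out = new Map(a);
      for (const [v, val] of b) {
        if (out.has(v) && out.get(v) !== val) return null;
        out.set(v, val);
      }
      return out;
    }

    // (dnf1 and dnf2) as a DNF: pairwise-merge terms, drop contradictions.
    function intersect(d1: Term[], d2: Term[]): Term[] {
      const out: Term[] = [];
      for (const t1 of d1)
        for (const t2 of d2) {
          const m = merge(t1, t2);
          if (m) out.push(m);
        }
      return out; // keeping this small (e.g. via subsumption) is the hard part
    }

    // clause (x1 or not-x2) as a DNF: [{x1=true}, {x2=false}]
    const c1: Term[] = [new Map([[1, true]]), new Map([[2, false]])];
    const c2: Term[] = [new Map([[1, false]]), new Map([[2, false]])];
    console.log(intersect(c1, c2).length); // 3 surviving terms; {x2=false} subsumes two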

I'm happy that it works well for many small problems.


Automation of work. I believe that as the number of daily applications we use increases and the number of available APIs increases, the need for automation across these applications increases too.

https://bustl-app.com - A SaaS product that acts as a personal assistant that will integrate with a range of different apps.


Would this be something like Zapier?


I'm working on strong AI, using natural selection and reinforcement learning to develop an intelligence not necessarily modeled on ours.

What I can't figure out is what to use as inputs, analogous to the human senses, so that it doesn't become too specific, i.e. weak, but instead remains general and able to understand the binary language computers use.


I'm trying to improve people's abilities to predict the future.

Predictions are a critical part of decision-making, and it's possible to improve – see, for example, Philip Tetlock's work. But that requires the right tools, which we are building: https://www.empiricast.com


I'm re-implementing Chrome extensions for Electron. The existing implementations are specific to dev-tools extensions, and so don't work for the extensions we want in our Electron app. I'm finding working through the various inter-process communication models (and the resulting implementations) really interesting.


I'm building a simple, opinionated tool to create and manage roadmaps the right way (i.e. not as features on a Gantt chart). Think of the opinions the Basecamp guys bake into their products, but targeted at roadmapping. I'm going to use it with my own internal product team for a while, but I could easily see it being a good standalone product.


Working on a new company, one which (I hope) will turn the restaurant industry on its head. I want to make in-restaurant ordering, payment, and service pure joy.

I'm mostly heads down coding every day, building an MVP. Also trying to find some investor interest where possible, however fundraising has never been something I'm good at.


This reminds me of an experience I had in the UK that was not pure joy. I paid using their website, and as soon as I saw the payment confirmation appear, closed the web browser. Well, whatever tech they were using must have sent the internal “they paid” message on a hit from the confirmation page. So, the money came out of my account but their systems didn’t track it. It took quite a few weeks of back and forth to get my double payment back.


This seems like something everybody wants, so I'm surprised it hasn't happened yet. What are some of the (presumably non-technical) challenges in such an enterprise?


The biggest problem I'm facing right now is credit card processing costs. The restaurant business is a high-revenue, low-margin industry. The fees are higher for online transactions (card-not-present) than for in-person transactions (card-present), on the order of 1-2%. To make that concrete: on a $40 check at a 5% net margin, an extra 1.5% in fees is $0.60 against $2.00 of profit, nearly a third of it. I think this is the biggest reason it hasn't happened yet. However, I have a workaround which I think will work.


Shouldn't an in-restaurant ordering solution be processing card-present transactions as opposed to card-not-present?


I was wondering that. Why would you go to a restaurant without the ability to pay?


If they're in the restaurant, why isn't the card present?


Oh no. Not another crypto currency or other virtual currency I hope.


Nope.


Trying to get companies to give me a chance without a 4-year degree. Even my own employer won't promote without a 4-year degree.

While this sounds like a complainy-pants problem, this is a very real problem for a very large percentage of the United States. Without a 4-year degree as a de facto dues card, you are severely limited in your options.

At 34 years old, I could maybe have a degree by 40 while working full time, and I'd have to take on $30-60k of debt to compete for entry-level jobs against 20-22 year old applicants (many schools now have programs so students can graduate simultaneously with a high school diploma and an associate's degree). At my current income, if a degree could get me an extra 15% within a year of graduation, I would be in my 50s before I paid the loans off at current rates. That means I sacrifice the last half of my 30s to break even in my 50s, and maybe make some extra money in my 50s and 60s, losing out on 20-30 years of compounding, because I don't have that arbitrary degree in anything as a dues card to say I'm worth hiring/promoting.

Blah.

Last year I made about 10% less than the year before because of zero overtime. Our annual merit-based increases are often break-even (sometimes not even that) once you factor in inflation and insurance cost increases. Throw in the constant nagging pressure of cancer risk (my father died of it, my mother had it, my father's mother died of it), climate change, international trade issues that could see me laid off, and automation possibly replacing jobs in the near future, and it can often be quite crushing. Especially when you're trying to maintain sobriety and just want to run off into the woods with a cask of high-proof alcohol and try to befriend a bigfoot who'll provide food and shelter so you can die of Lyme disease or exposure, living as a refugee in Bigfootville.

Meanwhile, you see people with YouTube channels buying what amount to mansions (What's Inside, Jenna Marbles), taking international trips monthly (What's Inside; Casey Neistat used to, on aircraft with seats costing tens of thousands of dollars a flight), and even frequent crazy domestic trips (What's Inside), and you're like, "Dude, I just want to make more than 34k a year".

I truly can't imagine what it is like for people that are consigned to working fast food/retail/service jobs as their sole source of income. It has to be all but crippling.


As a co-founder, we're building a scalable AI deployment system for banks. At the outset, not as sexy, but our system is meant to highlight deficiencies and problems with AI, like bias and fairness issues, so that people are aware and have the impetus to fix things. Looking to effect change from within.


I’m working on making email SaaS providers behave reliably as a side project. I’ve written a piece of software which is self-hosted that does failover, retries, queuing, logging, and monitoring for sending mail via SMTP so that people don’t have to spend time implementing all of that plumbing themselves.
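
For illustration, a minimal TypeScript sketch of just the failover-with-retries part (the provider interface and backoff numbers are hypothetical, not the actual implementation):

    interface Mail { to: string; subject: string; body: string }
    type Provider = { name: string; send: (msg: Mail) => Promise<void> };

    // Try each provider in order, retrying with exponential backoff,
    // and report which provider finally accepted the message.
    async function sendWithFailover(msg: Mail, providers: Provider[], retries = 2): Promise<string> {
      for (const p of providers) {
        for (let attempt = 0; attempt <= retries; attempt++) {
          try {
            await p.send(msg);
            return p.name;
          } catch {
            // back off before the next attempt, then retry or fail over
            await new Promise(r => setTimeout(r, 250 * 2 ** attempt));
          }
        }
      }
      throw new Error("all providers failed"); // this is where the queue takes over
    }

The real plumbing adds persistent queuing, logging, and monitoring around a loop like this.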


We are working on how to match engineers with engineering teams based on the work environment and team values/culture.

I've had too many friends and family members end up at companies that were not a match and watched the massive stress pile up. I want to help people find the right team/culture for them.


Sounds interesting, I always like to see good ideas in the hiring space.

How do you differ from something like https://www.keyvalues.com/


Thanks! I love what Lynne is doing!

We differ in a lot of ways, but the biggest is that we are trying to profile the actual eng team and how they work / what they value. And, then match you with teams that are a good fit + high satisfaction in key areas.

For example, say you are motivated by big tech challenges; we would match you with teams that are motivated by similar challenges and whose current eng team reports satisfaction in that area.


Plenty of data is lost in Google Search Console due to its limit of at most 1000 records for any time period. The real value is in the long tail, and it's lost.

I am working on a solution that gathers the data normally not seen in the Console dashboard and surfaces actionable insights for the user.


Right now I'm working on electronic music generation. That is, how neural nets and other technologies can be used to generate electronic music. It works roughly like text generation using Markov models, but there are a lot of problems specific to music that don't arise with text.
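
To make the Markov analogy concrete, here's a toy first-order chain over MIDI note numbers in TypeScript; timing, polyphony, and long-range structure are exactly the music-specific problems a sketch like this ignores:

    // Count observed note-to-note transitions.
    function train(notes: number[]): Map<number, number[]> {
      const next = new Map<number, number[]>();
      for (let i = 0; i + 1 < notes.length; i++) {
        const list = next.get(notes[i]) ?? [];
        list.push(notes[i + 1]);
        next.set(notes[i], list);
      }
      return next;
    }

    // Sample a melody by walking the chain from a start note.
    function generate(model: Map<number, number[]>, start: number, length: number): number[] {
      const out = [start];
      let current = start;
      for (let i = 1; i < length; i++) {
        const options = model.get(current);
        if (!options || options.length === 0) break; // dead end: no observed successor
        current = options[Math.floor(Math.random() * options.length)];
        out.push(current);
      }
      return out;
    }

    const model = train([60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]); // a C-major noodle
    console.log(generate(model, 60, 16)); // 16 notes starting from middle C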


Procedural fictional world generation.


Explain? Can you make it for VR?


I'm not quite ready to reveal it here just yet, but no, it's not meant for VR.


I am working on a project similar to lotrproject.com (a newer example would be www.witchernetflix.com), but for any book (or universe) in general, e.g. my favorite series, Malazan Book of the Fallen. You could describe it as a wiki with a fancy UI, I guess.


Data-driven robot control methods for solving furniture assembly.

It's an interesting problem, requiring both dexterous manipulation and long-term planning. It's also compositional, so I believe some form of hierarchical control and planning can solve it.

www.clvrai.com/furniture


1. A tiny script for getting the top N posts of the past week from a subreddit into a Telegram channel. This can be extended in so many ways, and I couldn't find an existing solution.

2. An all-encompassing personal knowledge management solution that is effortless and universal.


Symbolic math for students. More generally, UI principles for tree manipulation.

Why am I still using pen and paper for math homework? Why do I have to rewrite the whole friggin thing every step?

And there's a hope that whatever I learn might be useful for lisps, too.


Working on a free lease trading website. Tried SwapALease/LeaseTrader but the fees were WAY too expensive for just posting. Going with a freemium model with posting being free and additional ID/verification checks charging money.


Making people connect offline


I've often thought about trying to use HN to facilitate more of this.


Bringing an online community offline is very challenging. HN has many members, but they're spread out around the globe. An interesting challenge. There have been some initiatives in the past, I think? Not by HN itself, but by members, I mean.


I chose something that probably is not solvable. The shortlist:

1. The 4-manifold problem. While you can see that the surface volume of the shape should be equal, the mathematical proof is the impossible rub.

2. A prime number generator.


I am building a movie matcher that alerts users about their IMDb watchlist.

https://github.com/navyad/moviematch


Models to detect strokes in medical images to be deployed in a hospital.


Could you add thermal imaging support to pick out folks with a fever in a crowd?


What specifically are you looking at? DWI MRI? CT Perfusion? AI models?


I am trying to translate Andrew Yang's policy site into Spanish.


I use statistics and machine learning to study the physiology of human pain and stress.

This can lead to AI-suggested interventions that people can apply to themselves or use to support someone else.


I'm trying to challenge the current model of hiring, which relies heavily on CVs and, frankly, awful job ads.

It's a complex problem, but there must be a better way of doing things!


How to write a compelling story in the form of a book.

Sounds easy...

But it's the most difficult thing I've ever tackled, even considering that I've consumed books like water since I was a kid.


Community medicine; precision medicine - all self-funded.


Sounds interesting. Can you explain more?


Nothing at the moment unfortunately. I've started creating a web app for news/social/groups/dating but it's not that far along yet.


An app to help me track my due payments (I always forget things like the monthly fee for piano lessons and similar non-automated payments; plus it helps me get an overview of upcoming expenses).

Released recently as my first app on Google Play:

https://play.google.com/store/apps/details?id=com.due.core&h...


I'm working on https://carboncredit.io

It is a next generation carbon offset marketplace.


I fail to understand what your product offering is by looking at your site. Could you explain what exactly this product is trying to achieve, and why I would even want a carbon credit cert?


Basically, you would want a carbon credit certificate to help save the planet.


I don't think clicking a green run button will save the planet. I don't understand either. Your site doesn't seem to have any actual information about the carbon offsets or who does them or where or how it relates to the API. What do you actually do?


For the record: I'm taking a screenshot of these comments :-)


I'm trying to suggest different Docker settings for each service in a multi-service system in order to maximize the performance of the system.


I am currently writing a future implementation in C++. We were using the future from the stlab library, but that turned out not to work 100% reliably.


Multiplayer networking for a turn-based role playing game. Lots of difficulties I didn't foresee at all beforehand.


Could you list some of them?


Some difficulties arise because I'm doing it in the browser: things like connection/reconnection logic, how to handle duplicate clients, authentication, and the decisions surrounding all that, because my game isn't a drop-in-and-go game like Slither.io.


I'm working on an HTML5 renderer for the most complex subtitle format in the world (that is actually in use).


A truly anonymous anti-social network and an anonymous but verifiable identity to go along with it.


I'm working on automating the acquisition of cardiac MRIs using deep learning!


Where does deep learning come in?


We use heatmap localization to identify the cardiac landmarks that define the viewing planes.


Is there a way to get involved in something like that? It sounds so fascinating.


Trying to figure out why learning how to cook is so tedious in a software world.


I'd like to know of a good book on the science of cooking. I hate blindly following recipes.

Like a poor man's version of this book by former Microsoft CTO Nathan Myhrvold:

https://www.amazon.com/Modernist-Cuisine-Art-Science-Cooking...



Lol, why do you think learning how to cook is tedious? There are applications that help solve this problem, e.g. FitMenCook, Allrecipes, etc.


I'm working on aggregator editing software for the site I recently launched. It's a LAMP stack. If anyone wants the source code, please contact me via email.

https://www.trumpsdaily.com/


Baitblock (https://baitblock.app) here:

We help you avoid distractions while working on websites by way of our Chrome extension. For example, Baitblock removes recommended videos on YouTube while you're working.

It also deals with first-party cookie tracking: it clears cookies/storage on every page load as long as it detects, using machine learning (NLP), that you're not logged in to the website (the upcoming version fixes many bugs).

Since there are so many cookie/GDPR popups nowadays, Baitblock automatically hides them while you're working.

You can also add summaries/TL;DRs for any link on a website (right click) so others don't have to click through.

The end goal of Baitblock is to block all possible distractions in a webpage and save everyone's time.

The latest version of Baitblock 0.1.0 is awaiting approval with many fixes and new features.


Leader election in a distributed system of unknown size.


Trying to get hired in infosec with 10+ years of web dev experience, and failing miserably even though there is a "desperate need" for more people from all backgrounds to enter the field :)


Helping car salespeople sell more vehicles.

Lots of webscraping.


I'm helping dealerships better communicate with customers during service. The automotive world is a wacky one.


That seems a bit more 'wholesome' than what I'm doing.

Maybe we can compare notes?


Do you need to deal a lot with captchas?


No.


Bringing up two kids.


I've discovered a new class of surfboard fins; they're also the first designs ever that can simply be 3D printed, sanded, and surfed. This means that a person in a developing-world surf town could affordably obtain a 3D printer and begin producing $50-$100 of fins per day, at 1/10th that price in filament cost. My site: https://techfins.surf (still a bit to go in completing it)

My Open Collective page (I’m ballin’ on a slim budget here) https://opencollective.com/techfins

You can see more of my fins effort on my instagram page, @stormfins

I recently decided to use the techfins name instead.

I ended up working on this thanks to a passion for surfing and the knowledge that new airfoils could radically improve my surfing by augmenting my surfboards' capabilities. I learned CAD four years ago just to do this - make fins based on new airfoil templates. This 'new class' is essentially high-lift fins. Compared to the current surfboard fin standard (6mm-thick fins), my 16mm-thick fin designs provide radically more drive, traction, and stability at lower speeds, making normal to pumping surf more accessible and enjoyable to novices and experts alike.

These fins are not only empowering for surfing ability, they're also safer because of their thicker, more rounded edges, and, when 3D printed, the fact that they break before your skin does. Also, if these fins were fitted with internal flotation during printing, they could be recovered and glued back into place using automotive plastic glue.

These are also literally the first high-performance Wavestorm fins ever created. Anything else out there is a boring, simple fin design.

Some past feats I can be proud of that you all may appreciate:

• created BeelineReader.com's first working app, helping them get off the ground. It helps you read much faster using a novel, internationally patented innovation.

• created a web browser with T9Space.com that empowered tens of thousands of Nokia phone users around the world to access the desktop-only internet back in '07-'10.

Earned a bachelor’s in CE at UCSC 20 years ago.

My moniker thirdsurf is about a P2P ‘school of surfing’ project I want to get going next. There needs to be a more dynamic connection between those with the knowledge and those who’d like to learn.

If this fins thing gets off the ground, it'll open up other possibilities. Nucleos.com is an example of a company that's been developing a cheap school software server that can operate in developing-world conditions. I want to see 3D-printing fin labs sprouting up that use a computer like that to serve edu apps to anyone nearby who has a wifi device.

I also aim to set up a P2P market for people who could fabricate the digital fin designs I'll be making available. This could open up a market of innovation, empowering people to tout novel materials and fabrication methods, helping advance greener, safer, and more economical ways to make these fins and other goods.

Also, in places where it may be difficult to obtain filament, the machines https://preciousplastic.com/ and others are developing could turn plastic refuse in developing-world areas into a precious commodity.


I recently launched Homematchx.com as an interactive way to connect future homes and buyers at similar stages of the real estate process. Unlike other real estate listing platforms, which show move-in-ready homes less than 30 days out, Homematchx lists future homes and buyers at various stages of the process, empowering users to find the right match and close when they're ready. Everyone has a plan and a price! Why wait until you're 30 days from buying or selling when you can find the perfect match and extend the closing date at your convenience?

According to bankrate.com, over 60 percent of millennial home buyers regret their home purchase. Shocking! In our industry this stat is overlooked, and it has made me wonder why. I was working with someone whose lease was coming up and who wanted to explore buying a home. They had two choices: rush into a home, or wait another year. They decided to move and lease in the area where they wanted to purchase. But being unfamiliar with the housing market, after signing a 12-month lease they had no way to identify future homes that would come to market when the lease was up. This story led us to a problem in our market. To avoid the traditional move-in-ready, less-than-30-days market, our users can match with homes that will be available for sale at their expected time to purchase. This is a perfect way to build confidence and prepare for the journey ahead.

Let's keep it 100 percent! We don't have access to who is planning for the future, which could produce a better outcome. As a real estate professional of over 10 years, I've noticed that soon after buyers move into their new home, better homes in the neighborhood get listed for sale at around the same price. We oftentimes think we purchased the best home at the right time, but everyone has a plan and a price. That's real estate, right?

Would you be willing to wait if you found the home worth waiting for, or to make the seller an offer they can't refuse for your one-of-a-kind? Homematchx assures you never miss out on wishing you could have purchased the home next door, the home across the street, or the home around the corner. Our platform allows consumers to see homes that will be available for sale up to three years out, giving you access to more inventory than today's listings.

I think sellers are at a serious disadvantage when they need to sell. Days on Market is a huge issue and a growing concern in real estate. Sellers are unsure how many buyers fit their home's description and would be willing to purchase it at their desired time. Timing the sale of your home perfectly is unpredictable. Our platform lets you see all the buyers, their compatibility with your home, and whether or not they have been qualified. Never again will a seller list a home for sale without knowing who will actually purchase it.

We are heading into the new-construction industry to help home builders better understand the real estate market and who's available. There are so many things buyers don't have access to that they need in order to time their new-construction journey perfectly.

I'm excited about the many problems we can solve, but I know we cannot be successful without users knowing it exists. I'm on a Godly mission to finally change the real estate market and make it accessible regardless of your timeline.

Stephen L.


I've been working on a video editor (https://phot-awe.com) for more than 1.5 years.

The biggest challenge has been speed: the first proof of concept was a prototype that was kinda slow (C#/WPF/Windows). I rewrote it using the lowest-level stuff possible from WPF, and that took me a looot (roughly 3-4 months, partly to also make it easy to extend/modify). That was an improvement of roughly 3-4x, but for non-trivial stuff it was still too slow (and saving the final video was insaaaanely slow). So I did another rewrite, in UWP, and that took another 4+ months.

Now I'm really happy with the speed - it's 3-4 times faster than before, and at saving, it's 10-12 times faster.

In order to make it happen, I've worked insane hours (and still am) - but that's that. Right now (for the next 2 months) I'm focusing on stability and some improvements. I hope to have a pretty cool new feature ready in roughly 3-4 months, and we'll see.

Challenges: countless, probably I could write a book ;)

1. Parsing existing videos - in WPF this was insanely hard, and it took me a long time to come up with a viable solution (which, when porting to UWP, I ended up throwing away).

2. Estimates - I used to be pretty good at estimating how long a task would take, but since everything here was new to me (animating using low-level APIs was close to undocumented), pretty much everything took 4-5 times longer than I expected. This was soooo exhausting and depressing that at some point I just stopped estimating, because I knew everything would take longer.

3. Changing the UI due to user feedback - basically, I ended up redesigning 80% of the UI to make it easier to use. What I thought would take me one week ended up taking over a month.

4. Tackling everything at once: trying to implement a new feature while dealing with bugs people would find, with issues that came up while implementing the feature, and with issues raised by the photographers I collaborate with (those who create the app's effects/transitions).

5. Porting to a new technology (UWP/WinRT). This is something I hope I never have to do again - I was forced into it because of the speed gains. I had to reimplement/retest every control I had initially developed - that's one thing. The other is dealing with the idiocy of WinRT - which loves async stuff and also loves limitations. Also, the UWP documentation is soooo bad compared to WPF's - and there are very few resources, because most people are put off by it (not going to go into detail as to why; that's another book I could write). Not for the faint of heart.

6. Compilation times - on the old technology (WPF), everything was insanely awesome. On UWP, compilation times are roughly 6 times slower. That is baaaaaaaaad. I'm doing all sorts of workarounds to make things faster.


I'm working on a versioned, temporal DBMS[1] called SirixDB in my spare time, which is the most exciting thing :-)

It's based on a university project I've been working on basically since day one in 2006.

I know it's crazy to work on such a large project initially alone. Lately, however, I'm getting the first contributions, and maybe I should start collaborating with the university or with the company of my former supervisor (who began the project for his Ph.D.).

I'm now more than convinced that the ideas are worth working on, especially with the advent of modern hardware such as byte-addressable NVM :-)

Currently, I'm working on the storage engine itself, to further reduce storage space consumption and to make the system stable. I'm experimenting with importing larger data sets (JSON and XML, currently up to 5GB), with and without auto-commits, and with different features enabled/disabled, for instance storing a rolling Merkle hash for each node, the number of descendants, a path summary, and so on.

Some of the features:

    - the storage engine is written from scratch
    - completely isolated read-only transactions and one read/write transaction concurrently with a single lock to guard the writer. Readers will never be blocked by the single read/write transaction and execute without any latches/locks.
    - variable-sized pages
    - lightweight buffer management with a "kind of" pointer swizzling
    - dropping the need for a write-ahead log due to atomic switching of an UberPage
    - optionally, a rolling Merkle hash tree of all nodes, built during updates
    - ID-based diff-algorithm to determine differences between revisions taking the (secure) hashes optionally into account
    - non-blocking REST-API, which also takes the hashes into account to throw an error if a subtree has been modified in the meantime concurrently during updates
    - versioning through a huge persistent and durable, variable-sized page tree using copy-on-write
    - storing delta page-fragments using a patented sliding snapshot algorithm
    - using a special trie, which is especially good for storing records with numerically dense, monotonically increasing 64-bit integer IDs. We make heavy use of bit shifting to calculate the path to fetch a record (see the sketch after this list)
    - time or modification counter-based auto-commit
    - versioned, user-defined secondary index structures
    - a versioned path summary
    - indexing every revision, such that a timestamp is only stored once in a RevisionRootPage. The resources stored in SirixDB are based on a huge, persistent (functional) and durable tree 
    - sophisticated time travel queries
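
To make the bit-shifting idea concrete, here's a toy TypeScript sketch of the general technique (an illustration, not SirixDB's actual layout): with dense 64-bit record IDs and, say, 128-slot pages, each trie level consumes 7 bits of the ID, so the page path falls out of shifts and masks instead of comparisons.

    // Toy sketch: compute the page path for a record ID in a trie with
    // 2^7 = 128 slots per page (not SirixDB's actual parameters).
    const BITS_PER_LEVEL = 7n;
    const MASK = (1n << BITS_PER_LEVEL) - 1n; // 0b1111111

    function pagePath(recordId: bigint, levels: number): number[] {
      const path: number[] = [];
      for (let level = levels - 1; level >= 0; level--) {
        const shift = BITS_PER_LEVEL * BigInt(level);
        path.push(Number((recordId >> shift) & MASK)); // slot index at this level
      }
      return path; // indices to follow from the root page down to the record
    }

    console.log(pagePath(123456789n, 4)); // -> [ 58, 111, 26, 21 ]
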
Besides the storage engine challenges, the project has so many possibilities for further research and work:

    - How to shard databases
    - Query compiler rewrite rules and cost-based optimization
    - A brand new front-end
    - Other secondary index-structures besides AVL trees stored in data nodes
    - Storing graphs and other data types
    - How to best make use of modern hardware as byte-addressable NVM
[1] https://sirix.io or https://github.com/sirixdb/sirix


My lack of sleep.


Not sure it’s interesting. But it did become a problem :)

Anyway...

This summer and fall I wrote a JS core lib and a set of compatible packages that together greatly simplify the creation of terminal-based Node apps and games (in the realm of blessed, blessed-contrib, and ink, but with no dependencies and with a novel API/architecture).

I got into it because my son did this Node project where an animated car drove through a forest of cellular-automata-generated trees. Yeah. You read that right. Things spiraled from there...

It is not a small project and it is pretty close to release form. I’ve used the lib and components to write a couple of small but non-trivial things. So, yes, it works.

In December, though, I stopped actively working on it. There are various reasons. One of which is that there is snow on the mountains. There are other reasons, none of which is code related.

More curious? More questions. Cheers.


Raising a child.


How did you end up working on that?


tag this NSFW


Surely that would be 'making a child'?


Kudos to you -- that's a problem I'm definitely not prepared to work on yet!


The past several years I've been trying to find some tangible philosophical ground to stand on. This (desperate) search is and has been the product of mental illness I've dealt with since my adolescence (I'm 32 now).

Long story short, I managed to get the mental illness under control, something I thought I'd be living with for the rest of my life.

My research has included mostly standing/walking meditation, and reading a lot on philosophy, religion, psychology, and such.

This is a personal project I've only just sort of revealed, after some persuasion by my peers. I didn't really have much intention of putting it out in public, but it has turned out to be something significant. There is a lot to say about it.

EDIT: If you're curious, here is what I came up with after I started recording my research. DISCLAIMER: there is some personal stuff I talk about.

https://github.com/myles-moylan/head_project


I would suggest that you read "He Is There And He Is Not Silent" by Francis Schaeffer. He tackles the three big philosophical questions (metaphysics, morals, and epistemology) from a Christian perspective. His writing is simple but deep - I had to read it several times to get parts of it.

He's doing an overview of huge fields, not an in-depth, point-by-point argument. You sometimes have to fill in details of his argument yourself, because he doesn't answer every possible counter-argument.


Thanks for the suggestion, I will read it. :)


Could you please elaborate on meditation? How has it helped you, how much time do you spend, what is walking meditation etc?

Thank you


When I meditate all I'm trying to do is keep my attention in the present moment; if my mind starts to wander I bring it back, and so on.

I've always loved going on walks, and they've just turned out to be a good way to work through mental/emotional stuff. Walking helps you stay in the moment as well, and when you're focusing your attention on the moment there's always new things to sense.

Let me know if that's not clear enough.

EDIT:

Another important part of meditation is the analysis of your thoughts and emotions. Whatever comes up, try to understand why it did as well as you can.


I go on long walks, this is something I will try. I usually listen to podcasts while walking, maybe it is time to try something different.

About analyzing thoughts - I tend to over analyze everything. How do you not fall into this trap?


It's vital that you're honest with yourself in order to find some sort of base cause to any given thought/emotion. I honestly end up catching myself lost in thought after a minute or so more often than not.

It's really a best effort sort of thing. But the more you practice, and the more effort you put forward, the stronger you'll get and the easier it will be to filter the signal from the noise.


> walking meditation

Walking while meditating. There's a common misconception that you need to be sitting or lying down to meditate (or even need to be in a dark, candle-lit room, lol). Meditating is a state of mind.


Distance running gave me the space to let stressors bubble up, acknowledge them and move on. Once all of those thoughts pass it becomes easy to focus on breathing and be present in the moment. Over time it also becomes easier to recognize this pattern and expect and accept it.


Yup, this has been the case for me too, along with any repetitive task such as rowing, turning a garden, chopping wood, etc.


Some of my best ideas come whilst doing dog poop sweeps out of my yard. ... I know


How do you find an accurate numerical approximation to e, the base of the natural logarithm? Last weekend I stumbled onto a shockingly easy and effective way!

Google query

"base of the natural logarithm e"

reports

e = 2.718281828459

that is, 13 digits.

The calculator in Windows 10 reports

e = 2.7182818284590452353602874713527

that is, 32 digits.

Last weekend found

e = 2.71828182845904523536028747135266250

that is, 36 digits.

The math and code are below and could just as easily get e to, say, 500 decimal digits!

How'd that happen?

Last weekend I worked on some short but relatively careful notes to get a nephew of 9 started on calculus, and part of that was Taylor series, in just two pages with large fonts!

The code and the core of the Taylor series derivation are below.

In TeX, Taylor series is

f(x) = \sum_{i=0}^n {(x - x_0)^i \over i!} f^{[i]}(x_0) + R_n(x_0)

with R_n(x_0) as the error term.

To derive the Taylor series, really just find the error term

R_n(x_0)

and for that, treat the partial sum as a function of x_0 and differentiate with respect to x_0; nearly all the terms cancel. Simplify, integrate from x_0 to x, and apply the mean value theorem. That's all there is to it!

The results are, for some s between x_0 and x:

R_n(x_0) = (x - x_0) {(x-s)^n \over n!} f^{[n+1]}(s)

As above, the final output of the code:

e = 2.71828182845904523536028747135266250

From R_n(x_0) the error is less than

3 x 10^(-40)

The numerical output of the code is curious: Get a little over 1 decimal digit of accuracy for each term of the series! So the output shows two big triangles, one for the values of n! and one for the number of correct digits in the estimate of e.

A key to why this code is so simple and works so well: KEXX (the Rexx-style macro language of the KEDIT editor) can do arithmetic with 1000 decimal digits of precision!

"Look, Ma, here's the code -- dirt simple":

          /* NATLOG: estimate e from the Taylor series 1 + 1/1! + 1/2! + ... + 1/n! */
          macro_name = 'NATLOG'

          out_file = macro_name || '.out'

          'nomsg erase' out_file

          Call msgg macro_name':  Find natural logarithm base e'

          numeric digits 1000     /* KEXX decimal arithmetic, 1000 digits of precision */

          n = 35                  /* series terms beyond the leading 1 */

          sum = 1

          factorial = 1

          Do i = 1 To n
            factorial = i * factorial
            sum = sum + 1/factorial
            Call msgg Format(i, 5) Format(factorial, 50) Format(sum, 2, 35)
          End

          error = 3 / factorial   /* from R_n above: |error| <= e/n! < 3/n! */

          Call msgg macro_name':  The error is <='

          Call msgg Format( error, 59, 50 )

          Call Lineout out_file

          Return
     msgg:
          Procedure expose out_file

          Call Lineout out_file, arg(1)

          Return
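
For readers without KEDIT, here is the same computation as a minimal TypeScript sketch, using BigInt fixed-point arithmetic (with a few guard digits) in place of Rexx-style numeric digits:

    // e to `digits` decimal places from the Taylor series sum of 1/i!,
    // carrying 5 guard digits against truncation error.
    function eDigits(digits: number, terms: number): string {
      const scale = 10n ** BigInt(digits + 5);
      let factorial = 1n;
      let sum = scale; // represents 1.0
      for (let i = 1n; i <= BigInt(terms); i++) {
        factorial *= i;
        sum += scale / factorial;
      }
      const s = (sum / 10n ** 5n).toString(); // drop the guard digits
      return s[0] + "." + s.slice(1);
    }

    console.log(eDigits(36, 35)); // 2.71828182845904523536028747135266249...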


That's Calculus 1.


Yes, to quote, as I wrote,

> notes to get a nephew of 9

I'm writing the notes for my 9-year-old nephew. So, the notes are for a boy of 9. And it's calculus, for a boy of 9. Of course it's "calculus 1".

I thought I mentioned it was for a boy of 9.

So, I omitted (1) that a real-valued function of a real variable on a compact domain has a Riemann integral if and only if its set of discontinuities has measure zero, (2) that the derivative of such a function, if it is differentiable, has no jump discontinuities, and (3) that there is such a function that is differentiable everywhere but whose derivative has no Riemann integral.

But, still, it is amazing how easy it is to get 36 digits of e, and 500 if you want.

I've been through calculus, advanced calculus, advanced calculus for applications, differential equations, local series solutions to the Navier-Stokes equations, exterior algebra, real analysis, measure theory, and more, have taught calculus in college, applied it in US national security and business, and published in it, and still I'd never seen a clear treatment of how easy it is to get so many digits of e from Taylor series.

The results I found are amazing, and my good and long experience indicates that only a tiny fraction of calculus students appreciate that.

And as we know,

e = lim_{i \rightarrow \infty} (1 + 1/i)^i

and it turns out that that iteration is painfully slow: the error is roughly e/(2i), so even i = 10^6 yields only about six correct digits. From my experience this fact is also amazing and poorly known.

It was amazing stuff and a productive weekend.



