The 30 years out from this point is interesting/tricky because at some stage during that period computer AI will probably become smarter than humans, which will change a lot of things. It's not really possible to bring super-intelligent robots back to the now by spending money, because Google etc. are already spending a lot of money on AI research, so we'll just have to wait a bit.
Still, there are lots of ways it could play out, heaven-like or hell-like, so maybe it's good to think about it and try stuff.
But is Google spending money on the fruitful ideas? Pick something you care about that is good for people and see what happens when you take it out and then bring it back.
This may sound odd, but haven't you ever felt so overwhelmed when you take it out that, when you try to bring it back, you don't know where the threshold is?
Where some kind of inner voice says: "How do you even dare to think of that? You have no idea how to do it...", as if only some people were entitled to dream.
I'm afraid that may be a byproduct of lacking the knowledge to break the problem down into its smallest achievable parts, given current or near-future technology.
How can one tackle this? Is there a process? Should we just blindly try to find which subject matters and systems are involved, study them, and connect them? After all, we should take responsibility for our vision.
Should we just package our vision, show it to the world, and see if it resonates with people who have the knowledge in the areas required to achieve it?
Or should we simply drop it, and move on to the next one?
Wayne Gretzky: "You miss 100% of the shots you don't take." Baseball: not hitting 70% of the time is just the overhead for hitting 30%.
I.e., it's no big deal if you don't wrap your identity around it. My self-criticism was all about whether I could grind well enough with my colleagues to finish some of the ideas, rather than about my tendency to go off and have more.
As they used to say in the 60s: "Keep on truckin'"