Most fascinating is that the addition of sarcosine to the diet was able to mimic intermittent fasting responses. Sarcosine has previously shown psychoactive effects, ameliorating schizophrenia and depression symptoms in humans.
Would be curious to hear, too, as far as ad networks go.
There are many affiliates for online casinos, crypto exchanges, etc.; they usually have a CPA + revshare deal.
They scrap the non-converting audience. It's vastly more expensive, but I imagine converting an impression from an ad somewhere in an app or such into a paying depositor, even with lifetime values north of $200, is just not going to be economically viable.
So far, all the ones I have spoken to ended up with a CPM model, slightly shuffled.
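To put rough numbers on that, here's a back-of-the-envelope sketch; the conversion rate below is an assumption I picked for illustration, only the ~$200 lifetime value comes from the comments above:

  // Break-even CPM for an affiliate buying display impressions.
  // The conversion rate is an assumed illustration, not industry data;
  // only the ~$200 LTV figure comes from the thread above.
  const lifetimeValueUsd = 200;             // LTV of one paying depositor
  const impressionsPerDepositor = 200_000;  // assumed: 1 in 200k impressions converts

  // Expected revenue per impression, scaled to cost per 1,000 impressions (CPM).
  const breakEvenCpmUsd = (lifetimeValueUsd / impressionsPerDepositor) * 1000;
  console.log(`Break-even CPM: $${breakEvenCpmUsd.toFixed(2)}`); // "$1.00"
  // Above a ~$1 CPM the impressions lose money at that conversion rate,
  // which is why these affiliates end up on CPA + revshare deals instead.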
There was a post on here recently on how someone designed their logo by asking DALL-E a load of questions. The rationale could be that it might not remove that job, but it would certainly be a powerful tool for a logo designer to take on way more clients, thus driving down the cost of logo design.
If I'm thinking of the same post, that logo is poorly done/chosen. I know the author wanted very specific imagery in it and achieved that, but I don't think that logo would be considered "good" by most.
i feel like this boils down to a client-side misunderstanding of where graphic designers actually spend the majority of their time. i'd actually argue that working from spec or even a crude doodle on the back of a napkin is a lot easier than dealing with a client who is married to a poor design that DALL-E puked out for them. one of the most important things about the sketch phase is being able to iterate without too many assumptions while also being able to recognize and play upon the strengths of individual drawings. this is not a time-consuming process, but it also isn't something you want to rush any more than you'd want to rush the plastic surgeon performing your nose job. depersonalizing the design process in favor of volume does not particularly serve anyone and, moreover, it responds to a need that i don't really think exists.
this is not to say that these kinds of image generators are without uses, but right now we are still in the phase where people are being impressed by graphical fidelity (which was already incredibly cheap) rather than usefulness. imo this stage of ai will primarily work well for tasks that benefit from randomness while also being relatively low-stakes in the overall production pipeline. training a network to generate typefaces or to randomize npc portraits for an open-world game is precisely the sort of area where 'gluing things together' can potentially provide more benefits than headaches.
I think it'll start by removing the need for editorial illustration and photography for a lot of blogs and websites, starting with the less reputable ones.
MidJourney is already good enough at creating illustrations for articles, book covers, etc. Not something The New Yorker will be interested in, but better than anything you can buy for a couple of dollars or a small monthly subscription.
> I think it'll start by removing the need for editorial illustration and photography for a lot of blogs and websites, starting with the less reputable ones.
Stock photo sites, Google Images, and basic graphic filters did that a long time ago. Sure, DALL-E has certain creative possibilities that those don't, but the niches where it could be absolutely revolutionary, like satirical memes or "first pass" concepts in agency brainstorming meetings, tend not to be deliverables the creators get paid for. The publications giving illustrators a reliable income are usually after their consistent style, fantasy-world coherence, or novel ideas, which isn't exactly where DALL-E shines.
DALL-E is the writing on the wall for those with a competing talent: start looking now for other (non-art) work you may want to do in the future. It would be an egregious error to think that AI art programs are some new tool to learn, akin to Photoshop. Maybe that will hold true for a few years at best.
There will come a point, much sooner than later, where the value in typing prompts into an AI is going to only be worth minimum wage.
Substitute "the camera" (and perhaps even more "the gramophone") for "DALL-E" and "AI art programs", and the 19th century wants its argument against the future of creative media back.
Cameras require skill to use. AI does not. You can just ask it to generate a skilled photograph. The AI completely fills the gaps that previous tech left for humans.
I just cancelled my plan of paying a graphic designer to prepare an Android app (personal project) for launch. After playing with DALL-E I'm confident that "I" can make a logo and some artwork "myself".
Edit: meant to say icon, not logo.
That does now make me curious as to what degree DALL-E could be prompted to do some more UX-type work. So far I've only tested it with photographs/paintings.
Augmenting jobs is more likely, from what I have heard. It would be useful for rapid prototyping of artwork and could help designers get design specs to artists quicker.
The initial impression is that production-ready designs will require an artist's touch. But this discussion was in the context of AA games and up. For indie and small games, fully AI-produced art may be fine.
The similarity you see between this DeepMind project, the DARPA program, and research in Tenenbaum's lab is not incidental: there's a steady stream of crosstalk and cross-training between machine learning researchers who engineer artificial intelligences and cognitive scientists who reverse-engineer human intelligences. (Note, for example, that Peter Battaglia, one of the co-authors of this DeepMind project, was a postdoc with Tenenbaum.)
That scans with observations from recent text-to-image work, where there often seems to be an insight or two that is either left without citation or cites an unpublished work, and which saved the authors from testing an (ultimately) incorrect hypothesis.
I have seen some suggest that this is basically Google "allowing" the competition to catch up just to beat them a few weeks later, but generally it just seems like they're all kind of chatting with each other in the background instead.
Having pursued a PhD at Columbia and having taught classes there, I am surprised it took this long for someone to speak up. Also, the fact that there are no checks in place to certify top-ranking academic institutions is fascinating.
This is exactly right. When I was in grad school, we also explored new ionization designs to reduce the voltage requirements to achieve this. In our explorations, the biggest problems were weight and voltage.
Very cool to see, though. You can make this at home with a very basic high-voltage generator and a set of needles.
Vidrovr (https://www.vidrovr.com) unlocks insights trapped in the unstructured multimedia data, such as audio, image, and video, generated by businesses and governments. Vidrovr uses AI to automate the manual tasks people perform to utilize these data assets, leading to 5x efficiency gains in their work. Vidrovr spun out of Columbia University's AI lab and has been backed by premier investors including Samsung Next and Verizon.
Vidrovr's processing engine can streamline business operations that rely on unstructured data, using various AI models and tools to extract and model the insights locked in your data. Vidrovr currently serves clients in a number of industries, including media companies like the AP and financial institutions. Additionally, Vidrovr provides services to various USG organizations.
Having written NSF grants and other grants (DARPA, etc.), I have won some and lost many. I've compiled a few non-obvious takeaways:
- Storytelling is everything. It seems this is a huge lesson never taught in grad school.
- Technical details sometimes work against you. The author is absolutely right that getting the general thought process across is crucial.
- Who you get on your review committee tends to significantly skew the outcome. Can make or break your chances.
- I have known of a lot of groundbreaking work funded by other money for these exact reasons. Sometimes good science is too far afield for people to understand. An anecdote I like from recent times is how Eric Betzig built the super-resolution microscope.
The sad truth is $500k for a career grant seems like a lot when you're in the university, but when you get out and see where else money is being spent you realize how poor academia really is.
> The sad truth is $500k for a career grant seems like a lot when you're in the university, but when you get out and see where else money is being spent you realize how poor academia really is.
It's not just about the $500k; it's about the freedom to do your project your way from start to finish. I can't get $500k from any investors to do my work; their number one question is what's their ROI, which is not even in the universe of my concerns. The last thing I ever want to think about is how to profit from my research.
Another choice is to join a company and then convince that company my idea is worthwhile. If I get the green light to go ahead, that's great, but then I'm still always living under the threat of being fired or the project getting canned. Any university is happy to have a CAREER recipient, and your award is portable between institutions.
And sure, you'd possibly be able to convince your company to sponsor a project you're interested in leading, but for something like $500k you'd better have a good business case for it, and I can't imagine them approving something that would last 5 years unless it really aligns with their business goals. In order to get the clout to actually ask for this kind of project you'd also need to be at the company for a while, whereas something like a CAREER award is intended for early-career faculty. Imagine going up to your boss and saying: "You know that job you hired me for? I'd like you to keep paying me my regular salary, but I'd also like $500k to work on my own pet project about 70% of the time from September to May, and 100% over the summer. Oh, and I'd like to do this for the next 5 years."
Respectfully, I disagree. The point you make about financial freedom is, imho, an illusion. A few thoughts on your first point.
The NSF CAREER grant affords you the latitude to pursue your research interests, but those are inevitably aligned with the academic process, as you are in pursuit of tenure. So you need to publish, and you need to produce publishable work; you are very unlikely to choose only high-risk moonshot projects when your likelihood of receiving tenure is directly correlated with those publications. Furthermore, your statement that the NSF or the university is not an investor is also flawed, imo. The NSF specifically asks for impact, as it ties the money it allots a researcher to economic impact for the US, as it should, since it is taxpayer money. I.e., your investor is the US. A similar argument applies to the university that provides your startup lab funds: their long-term goal is to receive publications, patents, and prestige that they can then monetize, whether by selling those ideas, receiving royalties, or attracting donations to their foundations. As you become a more famous researcher you also attract more master's students, who pay a good deal of money to go to school there. They are in fact your investors, only expecting a different ROI...
As for companies: how much do you think an engineer or ML researcher costs a company per year, especially of the caliber found in a PhD lab? A $500k expense is pretty small. Your assumption that a company won't give you $500k to do work is an illusion; they do, it's just not hard cash, it's spent on the resources that you use. An average PhD-level base salary at a FAANG is $200k+, plus bonus/equity, closing in on over $300k.
To the first point: starting in 2017, the CAREER solicitation dropped the tenure-track requirement. I'm not tenure track and I have no aspiration of pursuing tenure.
Your point is taken about there being “investors” no matter what, but the expectations of those groups couldn’t be any more different. NSF is happy if you spend your money on grad students. The idea of building a research program that is attractive to paying masters students is a far different prospect from building a profit making venture. I’m quite comfortable talking about and demonstrating the broad impacts of my work, but those impacts don’t include making investors a boatload of money in terms of selling a product. That’s just not in the cards. And to put a finer point on it, I’m much more comfortable with my investors being the general American public and my country as opposed to principally rich people.
To your second point, I agree that it costs a lot for a company to hire an ML engineer as an employee. But I would say it is not typical for a researcher with maybe only a couple years of experience post grad school to be offered $500k to work on any project of their own choosing for 5 years (or more; let's just be clear that $500k is like the minimum you're even allowed to request for a CAREER award).
I mean, maybe that is happening, I’ve honestly never heard of that. Does anyone here have that kind of arrangement, or have you heard of anyone who has been able to pull that off? Would be very interested to know how to approach a company to convince them of that. Pay me a salary, give me $500k, let me work on whatever I want for 5 years at my own direction, and that would sound comparable to the kind of freedom you get with a career award.
> The sad truth is $500k for a career grant seems like a lot when you're in the university, but when you get out and see where else money is being spent you realize how poor academia really is.
This hits me every now and again.
But your first point is huge, and it's one thing we've been working on in our program's grant-writing class. Students phone in their specific aims pages so they can go write their approach sections, which is where they'd rather spend their time, and that's a huge mistake.
The takeaways are very true. I would go further and say that not only are the storytelling and the scientific vision a very important part of the sell, the language and terminology also need to be in sync with the reviewers. Oftentimes researchers write very technical details into the proposal that are hard to parse, and it backfires.
In non-blind proposals, pre-communication is the key.
It seems to me the biggest challenge for SEs transitioning into ML is that ML is a very broad topic and people conflate a lot of roles together, from purely research-based questions (backbones, optimizers, initializers, etc.) to more 'MLOps'-like pipelining questions, which tend to fall into the classical engineering/DevOps buckets. So the real question is: what type of ML do you want to do?
If you're looking to land a job at FAIR/DeepMind or Google Brain/Nvidia Research as a researcher or ML scientist, the expectations of knowledge are very different from 'data science'. These are research groups that work on pushing the state of the art forward. They are also supported by great engineers building awesome tools that improve ML research. So transitioning into this sort of role requires more than doing Kaggle competitions; it requires developing an intuition for the respective ML subfield and trying new things (and usually failing). I.e., this is a research role and will require a lot of study and learning.
If, on the other hand, you are looking for data science work, taking a model and building a pipeline to run AI, performing hyperparameter sweeps, or simply modifying some model code, then I would say that is much more engineering than research ML. This has a much lower barrier to entry coming from engineering and could be a good stepping stone to a transition into pure ML research.
On a more general note, something to consider when thinking of transitioning to ML is that these systems are probabilistic in nature, versus purely deterministic as in more general software systems. People (i.e., humans) are bad at wrapping their heads around distributional processes; you can see this in all fields that deal with them (quantum vs. classical physics, biological systems, etc.).
In general I guess what I have seen is when engineers try to dip their toes into ML, what's required is a mindset shift in how to approach problems. Once that happens the depth of that shift determines the type of role with ML you wish to pursue.
It's interesting that you mention this, because there's quite an impressive resurgence of privately funded R&D going on in the ML space. We're in an interesting phase, where the field is moving too fast to have 'canonical' methodologies (though we're getting close). To be an engineer in the deep learning space often requires reading and keeping up with research. Everything just gets dumped on arXiv, because the peer-reviewed publication cycle is almost too slow for the field.
Making a successful transition to ML, in my opinion, depends a lot on the individual. Without a strong background in calculus, linear algebra, and statistics, it's going to be difficult. Training a model is what people tend to focus on, but in my opinion, that's the easy part. The harder parts are evaluating/validating a model, analyzing and preparing your data, anticipating model performance, and understanding what to do to improve your fit, model selection, or architecture. Developing custom deep learning architectures at times requires a bit of an abstract mathematical intuition that I think will suit many engineers very well. A lot of engineers are well equipped to be successful in making a transition, but on the other hand, at least as many aren't.
In the future, I think the field will have many varying degrees of expertise, with the barriers to entry becoming lower all the time. We're reaching a point where some common use cases can be solved adequately in a nearly automated fashion. Some "autoML" tools don't really require any real understanding of ML, though I think it's not wise to get in the habit of using them without understanding how to evaluate a fit. These tools will be great for people who want to occasionally use ML to solve some smaller problems, but as a part of their larger job function.
In some middle area, ML engineers and practitioners will be training and operationalizing models and keeping up with major developments in research. But there will be some significant changes in the next decade. I predict the nebulous mix of data science, data analysis, and machine learning will become formalized into 3 major skills: exploratory data analysis, machine learning, and advanced computational statistics.
At the lowest level, researchers will continue developing the field, which like you say, is probably not something you transition directly into.
Forgive me if this is ignorant, but wouldn't an ad blocker simply need to inject an impersonation payload into the page, so the report would send incorrect attribution to the proxy server?
In the case of Google it could (initially) be quite simple: randomly change the utm parameters, the gclid parameter, and the like. This would at least make marketing tracking more "interesting".
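A minimal userscript-style sketch of that idea (the parameter names, utm_*, gclid, fbclid, are the well-known public ones; everything else here, including the random-token scheme, is an assumed illustration rather than any real extension's code):

  // Scramble common tracking parameters on every link in the page so that
  // attribution reports carry junk values. Sketch only: a real extension
  // would also need to handle dynamically added links and form actions.
  const CLICK_IDS = ["gclid", "fbclid", "msclkid"];

  function scrambleTrackingParams(rawUrl: string): string {
    const url = new URL(rawUrl, window.location.href);
    for (const key of Array.from(url.searchParams.keys())) {
      if (key.startsWith("utm_") || CLICK_IDS.includes(key)) {
        // Replace the value with a random base-36 token.
        url.searchParams.set(key, Math.random().toString(36).slice(2));
      }
    }
    return url.toString();
  }

  document.querySelectorAll<HTMLAnchorElement>("a[href]").forEach((a) => {
    a.href = scrambleTrackingParams(a.href);
  });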
Years ago there was an extension that did that for GA and Adobe Analytics at least.
But that would only be an arms race. We (analysts and marketing agencies) would obfuscate the params we use and switch them in the server-side container.
https://www.frontiersin.org/articles/10.3389/fphar.2022.8841...