The problem with 'sentiment analysis' as it exists today is that it requires a human-labelled training dataset specific to a particular domain and time period. These datasets are costly to produce and have roughly a 12-month half-life in terms of accuracy, because the language surrounding any given domain is constantly mutating - something 'sentiment analysis' models can't hope to handle, since their ability to generalise is essentially nil. I've worked with companies spending on the order of millions per year producing training data for automated sentiment analysis models not unlike the ones in the parent post.
That is the cost of getting useful value out of automated sentiment analysis: building and maintaining domain-specific models. Pre-canned sentiment analysis models like the one linked in the parent post are more often than not worthless for general-purpose use. I won't say there are zero scenarios where those models are useful, but the number is not high.
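To make the domain problem concrete, here's a minimal sketch (assuming NLTK's VADER as a stand-in for the kind of pre-canned, general-purpose model the parent linked; the example sentences and their intended readings are my own hypotheticals). A lexicon-based model scores words by their everyday polarity, not by what they mean inside a domain.

    # Minimal sketch: a general-purpose, lexicon-based model (NLTK's VADER)
    # scoring sentences whose sentiment hinges on domain-specific usage.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # lexicon VADER ships with
    sia = SentimentIntensityAnalyzer()

    examples = [
        "This new drop is sick, absolutely killer.",           # slang praise
        "They finally nerfed that broken build.",              # gaming: good news to many players
        "Short interest is exploding and the float is tiny.",  # finance jargon
    ]

    for text in examples:
        # 'compound' ranges from -1 (most negative) to +1 (most positive)
        score = sia.polarity_scores(text)["compound"]
        print(f"{score:+.3f}  {text}")

On generic web text a model like this can look reasonable; on text like the above it has no way of knowing that 'sick' or 'nerfed' might be praise in context, which is exactly the gap that expensive, domain-specific training data exists to close.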
Claiming that sentiment analysis is 90-something percent accurate, or even close to being solved, is extremely misleading.