There is no real difference between "help me" and "give me money".
There's a huge difference. Actively putting in work alongside someone is a great bonding experience. Being "in it" with someone is completely different from putting up some money.
Did you replace the coffee intake with other liquids? Usually when I see people experience "caffeine headaches", they tend to be the types who get most of their liquid intake from caffeinated sources. So, they end up with a headache from mild dehydration.
I was starting my day with a pot of coffee, and water/herbal tea throughout the day. I figured a pot a day wasn't good, so I went cold turkey after I ran out; had a horrid dull throbbing headache for the next few days (thought I was coming down with something till I realised).
I'm typically not a coffee drinker but do have "binges" that last a few weeks at a time. I've actually been making sure to drink at least two liters of water a day, so that's puzzling.
Microware's OS-9 Level 1 had preemptive multitasking IIRC - this was late 1970s/early 80s - it was distributed by Radio Shack for the TRS-80 Color Computer line, which used the Motorola 6809 8-bit processor at something like 0.89 MHz (pretty f'in amazing when you think about it).
Vizio collected a selection of pixels on the screen that it matched to a database of TV, movie, and commercial content.
I would like to know more about that process. I find it ethically abhorrent, but technically very interesting.
Like, is it grabbing, say, three pixels in constant locations across the screen and matching their color change over time? Is it examining a whole block? Is it averaging a block at some proportional location on the screen?
You can dive in from there, but it's basically either watermarking or fingerprinting of video and/or audio frames. Video was preferred because there were fewer false positives from music beds. In a nutshell, it's video Shazam.
I'm also curious if they'd be able to match different encodings of the same video or would only be able to match against specific encodings in their collection.
I would imagine it's simply a temporal comparison of pixel colors at predetermined locations, similar to how the Shazam algorithm[1] works? You'd just need to analyze enough pixels to reduce "collisions", coupled with the temporal aspect!
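To make that concrete, here's a toy Python sketch of what such a scheme could look like (purely hypothetical: the sample points, quantization, and hashing are my own guesses to illustrate the idea, not Vizio's actual method):

```python
import hashlib

# Hypothetical temporal pixel fingerprinting, Shazam-style. Sample a few
# fixed screen locations per frame, quantize the colors coarsely (so the
# match survives re-encoding along the delivery pipeline), then hash a
# sliding window of samples into lookup keys.

SAMPLE_POINTS = [(120, 80), (640, 360), (1100, 600)]  # (x, y), chosen arbitrarily
WINDOW = 8  # frames per fingerprint key

def quantize(rgb, levels=8):
    """Bucket each channel so small encoding differences still collide."""
    step = 256 // levels
    return tuple(c // step for c in rgb)

def frame_sample(frame):
    """frame: dict mapping (x, y) -> (r, g, b); a stand-in for real pixel access."""
    return tuple(quantize(frame[p]) for p in SAMPLE_POINTS)

def fingerprint(frames):
    """Hash consecutive windows of per-frame samples into lookup keys."""
    samples = [frame_sample(f) for f in frames]
    keys = []
    for i in range(len(samples) - WINDOW + 1):
        window = repr(samples[i:i + WINDOW]).encode()
        keys.append(hashlib.sha1(window).hexdigest()[:16])
    return keys
```

Matching is then just a database lookup: index the reference broadcast's keys ahead of time, and report a hit when enough consecutive keys from the TV agree. The temporal window is what keeps the collision rate down even with only a handful of sample points.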
It's not encoding-based - it's frame-based, basically bitmap data. It has to be to work across the whole video delivery pipeline, so it's fairly fuzzy but also accurate.
They need a source to compare to, so when we worked on it, masters were being sent from the network to the sync technology group. That way they had source data for comparison on the first broadcast of a show.
Outside of latency there's no reason they can't match against broadcast content off cable. For user tracking they can just log the fingerprint data and compare it to source data later for analytics, so this works fine.
As a former Amiga user, I'm familiar with the Amiga's graphics co-processor(s). I actually forget now how many there were; I remember the Copper and the Blitter, but don't recall whether those were separate chips or functions on a single chip.
So, is this a sort of video adapter that converts the native Amiga video output to HDMI compatible signals, or is it full graphics card that brings Amiga graphics (32/64/4096 color, with acceleration) to modern output resolutions?
It is a graphics card in the sense of the "ReTargetable Graphics" (RTG) system. I ship drivers for the Picasso96 API (which also emulates the competing CyberGraphX API). All OS-friendly Workbench/Intuition GUIs can then be used at high resolutions (up to 1280x720@60Hz or 1920x1080@30Hz) and color depths of 8 (palette), 16, or 32 bit. A bunch of open source games like Doom, Abuse, and ScummVM have been ported as system-friendly applications, too, so these run fine. Old games that bang the hardware still go through the custom chipset of the Amiga and are output through the 15kHz RGB connector, not via my card. But I'm currently working on an expansion that scan-doubles and upscales the classic video output, too.
(Edit:) On the hardware side, I implemented the Zorro bus protocol in the FPGA (in Verilog) and hooked it up to an SDRAM controller/arbiter and a DVI/HDMI encoder. There is also a simple blitter in the code.
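For anyone unfamiliar with the term, a blitter's core job is just rectangle copies between framebuffers without per-pixel CPU work. Here's a toy Python model of that operation (my own sketch over flat byte buffers, not the card's Verilog):

```python
# Toy model of a blitter's basic operation: copy a w x h pixel rectangle
# from (sx, sy) in a source buffer to (dx, dy) in a destination buffer.
# Buffers are flat byte arrays with a given row stride; in the real card
# this runs as FPGA logic against SDRAM.

def blit(src, src_stride, dst, dst_stride, w, h, sx, sy, dx, dy):
    """Copy a w x h rectangle of bytes between two flat framebuffers."""
    for row in range(h):
        s = (sy + row) * src_stride + sx
        d = (dy + row) * dst_stride + dx
        dst[d:d + w] = src[s:s + w]
```

In hardware the same loop becomes an address generator plus a burst of SDRAM reads and writes per row, which is why offloading it pays off even for a "simple" blitter.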
Are you maybe thinking about adding more things on this one card? Ethernet, USB, memory (e.g. 1GB by default)...
Maybe even emulation of a faster CPU in FPGA (à la the Vampire for the A600)... though it'd probably use the CPU slot instead of Z2/Z3.
Also, is it possible to hijack the function of the OCS/ECS/AGA directly via the Zorro slots (proxying the native gfx output, e.g. with games), or would it require some kind of hardware bridge between the gfx chipset and your card?
Then later on there was the Enhanced Chip Set and the Advanced Graphics Architecture.
On that note, I think the A500 my parents got me back in the day was an oddity. I distinctly recall it having 1MB chip RAM, suggesting it had the ECS inside. But it shipped with Workbench 1.x rather than 2.x.
That wasn't an oddity. It would have shipped with Workbench 1.3. I had one just like it with the Fatter Agnus chip that could address up to 1MB of chip RAM, although you could still configure it as 512K chip and 512K fast, which is what I did.
I can't remember whether you did the 1MB chip config with a DIP switch or by cutting a track on the motherboard - I think it may have been the latter, which is probably why I didn't do it.
I had a weird Amiga 500. It had a late OCS chipset, but at the hardware level its Denise could display bitplanes from slow RAM (the 512 kB RAM expansion)! Unfortunately I never tested the blitter, copper lists, sample playback, etc. from this memory range, so I have no idea whether those worked as well. Of course it's all DMA access, so my guess is they would have.
The RAM was mapped at 0x80000-0x100000 from the chipset's point of view. The CPU saw the same data at 0xC00000-0xC80000.
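That dual mapping is easy to sanity-check in a few lines. A small Python sketch of the two views (the base addresses come from the comment above; the helper functions themselves are just mine for illustration):

```python
# The same physical 512 kB of expansion RAM appears at two addresses:
# 0xC00000 for the CPU and 0x80000 for the custom chipset's DMA.
# These helpers convert an address between the two views.

CPU_BASE = 0xC00000   # CPU's view of the expansion RAM
CHIP_BASE = 0x80000   # chipset's (DMA) view of the same RAM
SIZE = 0x80000        # 512 kB

def cpu_to_chipset(addr):
    assert CPU_BASE <= addr < CPU_BASE + SIZE, "outside expansion RAM"
    return addr - CPU_BASE + CHIP_BASE

def chipset_to_cpu(addr):
    assert CHIP_BASE <= addr < CHIP_BASE + SIZE, "outside expansion RAM"
    return addr - CHIP_BASE + CPU_BASE
```

So a bitplane pointer written into the chipset registers would use the 0x80000-based address even though the CPU poked the pixel data in at 0xC00000-based addresses.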
The Amiga shipped with planar graphics modes; this card supplies chunky modes. Planar was great for a ~10MHz CPU moving 1- or 2-bitplane images around the screen, but totally failed when technology moved on to 8-16 bit color depths.
You could even argue Doom killed Amiga as a gaming machine.
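The cost difference is easy to see in a toy model (a simplified Python sketch of my own, not actual Amiga code): in chunky mode an 8-bit pixel is one byte write, while in planar mode that pixel's 8 bits are scattered across 8 separate bitplanes, so a single pixel write touches 8 memory locations.

```python
# Toy comparison of chunky vs planar pixel writes at 8-bit depth.
# Chunky: one byte per pixel. Planar: one bit per pixel per plane,
# packed 8 pixels to a byte, 8 planes deep.

PLANES = 8
WIDTH = 320  # pixels per row (classic low-res width)

def put_pixel_chunky(framebuffer, x, y, color):
    framebuffer[y * WIDTH + x] = color          # 1 memory access

def put_pixel_planar(planes, x, y, color):
    byte, bit = divmod(x, 8)
    mask = 0x80 >> bit
    for p in range(PLANES):                     # 8 memory accesses
        if color & (1 << p):
            planes[p][y * (WIDTH // 8) + byte] |= mask
        else:
            planes[p][y * (WIDTH // 8) + byte] &= ~mask
```

With 1 or 2 bitplanes the loop is short and the blitter helps, which is why planar held up fine early on; at 8 planes every texture-mapped pixel a Doom-style renderer produces pays that 8x penalty (or needs a chunky-to-planar conversion pass).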
I'm fine with the government as large as it currently is, I just want more of the stuff I like (infrastructure development, education, regulation enforcement, healthcare) and less of what I don't (overzealous law enforcement and military action, banks of federal lawyers creating fourth branches of government).
I'm afraid that isn't one of our choices. Everyone seems to have a different view of what is 'right'. So unless you want to be bulldozed whenever your view doesn't happen to be the majority's... it's really a choice between small or big.
Well, everyone has a different view of what is big or small, so your position is just as invalid.
That said, I never said "right". I was quite clear about what facets of government I value (civil rights, transportation and social infrastructure) and what I don't like (war machine, fascist law enforcement). Oddly enough, there are political parties in line with that, so your unsubstantiated assertion that I don't get that choice is pretty ridiculous.