Let me give you an exercise I really like for this.
Imagine a file with an endless list of URLs. For each URL, check whether it points to an image. If it does, identify the top 6 colors and store them in memory so they can be printed or saved later.
In something like Python this can be as short as 20 lines of code:
- open the file
- read line by line
- check that the URL is valid
- call the URL
- check if it's an image
- if image, read pixel by pixel and check the color
- store it in dictionary/hashmap
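The steps above can be sketched roughly like this. To keep the skeleton small, the actual download-and-decode step is injected as a hypothetical `fetch_pixels(url)` helper (in practice you'd build it from something like `requests` plus Pillow); everything else is standard library:

```python
import urllib.parse
from collections import Counter

def is_valid_url(url):
    # basic structural check: scheme and host must both be present
    parts = urllib.parse.urlparse(url.strip())
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def top_colors(pixels, n=6):
    # count each (r, g, b) tuple and keep the n most frequent
    return [color for color, _ in Counter(pixels).most_common(n)]

def process(path, fetch_pixels):
    # fetch_pixels(url) -> list of pixels, or None if the URL isn't an image
    results = {}
    with open(path) as f:
        for line in f:
            url = line.strip()
            if not url or not is_valid_url(url):
                continue
            pixels = fetch_pixels(url)
            if pixels is not None:
                results[url] = top_colors(pixels)
    return results
```

Passing the fetcher in as a parameter is a design choice in itself: it keeps the file-reading and color-counting logic testable without a network.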
Here is what I look for:
- does it compile/work?
- what language did you choose? (Python, for example, is quick to write in a few lines, but introducing concurrency takes more work)
- have you introduced some kind of caching for the URLs? (so you don't waste resources)
- are you checking whether each URL is valid, or do you just run them all and wait for errors/timeouts?
- are you checking the file type from the HTTP headers or from the body of the file?
- how do you handle errors?
- are you attempting concurrency? If yes, how?
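To make a few of those checklist items concrete, here is one way they might look in Python. `handle` is a hypothetical per-URL worker (fetch, decode, count colors); the `dedupe` set is the simplest possible cache, and `looks_like_image` decides from the `Content-Type` header rather than downloading the body:

```python
from concurrent.futures import ThreadPoolExecutor

def dedupe(urls):
    # simplest "cache": make sure each unique URL is processed only once
    seen = set()
    for url in urls:
        if url not in seen:
            seen.add(url)
            yield url

def looks_like_image(headers):
    # check the Content-Type header instead of fetching the whole body
    return headers.get("Content-Type", "").lower().startswith("image/")

def process_concurrently(raw_urls, handle, max_workers=8):
    # threads fit here because the work is I/O-bound: each worker spends
    # most of its time waiting on an HTTP response, not burning CPU
    urls = list(dedupe(raw_urls))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(urls, pool.map(handle, urls)))
```

A thread pool is only one answer to the concurrency question; `asyncio` or a process pool would be equally valid choices with different trade-offs, which is exactly the kind of discussion the exercise is meant to surface.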
I've actually done this exercise myself in a dozen or so languages to see what choices I'd make based on the language. Obviously one can spend significantly more time on it if they really desire, but the design choices are visible from the beginning.