The single biggest improvement you can make to your music listening experience, for the low, low cost of zero cents, is geometry. If you haven’t tried optimizing the geometry before, I promise you’ll be astonished–it only takes a few minutes to check it out, it’s totally free, and the leverage is precisely as large for crappy plastic PC speakers as it is for professional studio monitors.
Here’s the basic recipe:
The two speakers and your listening position should be the corners of an equilateral triangle; “listening position” means the center of your head when you’re focused on listening to music; check this with a ruler;
Both speakers should be oriented directly towards your head; check this visually–you shouldn’t be able to see any of the sides of either speaker from your listening position; adjust as needed;
As much as possible, try to keep the speakers level with your listening position: neither higher, nor lower;
The smaller the speakers, the smaller the triangle–and the lower the optimal listening volume;
Do your best to avoid placing the speakers with massive objects just behind them; both “massive objects” and “just behind” should be scaled with the speakers–for room-sized speakers, “massive objects” are walls, and “just behind” is less than 30-100 cm away; for PC speakers, “massive objects” are the PC itself (or larger), and “just behind” is less than 15-25 cm away.
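The recipe above is pure geometry, so it can be sketched in a few lines of Python (my own illustration, with example numbers): for an equilateral triangle, the listening distance and the toe-in angle follow directly from the speaker separation.

```python
import math

def listening_position(speaker_separation):
    """Distance from the line between the speakers to your head.
    For an equilateral triangle it's sqrt(3)/2 times the separation."""
    return speaker_separation * math.sqrt(3) / 2

# Example: desktop speakers 1.2 m apart (an assumed value, not a prescription).
d = 1.2
back = listening_position(d)
print(f"Sit {back:.2f} m back from the line between the speakers")

# Each speaker must be rotated 30 degrees inward from straight ahead to aim
# at your head -- a property of the equilateral triangle, independent of size.
print("Toe each speaker in by 30 degrees")

# Sanity check: your head ends up exactly one speaker-separation from each speaker.
print(f"Ear-to-speaker distance: {math.hypot(d / 2, back):.2f} m")
```

The same numbers work at any scale–only `d` changes between PC speakers and studio monitors.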
Try it, I promise it’s going to be a more meaningful experience than buying gold-plated connectors for $200 speakers!
In thermodynamics, the temperature of an object is a measure of the average kinetic energy of its particles. And that kinetic energy could be interpreted as the intensity of interactions between particles, without any significant loss of generality.
In natural language, at least in English and in my native Romanian, we use temperature-related adjectives to describe human interactions and relationships: warm welcomes, cold business meetings, warm friendships, cold people, or the infamous lukewarm reception of an idea.
I don’t know about you, but I find it quite satisfying when natural language happens to align itself so well with thermodynamics: cold meetings and people interact little, warm welcomes and friendships interact lots.
All the world’s a stage, And all the men and women merely particles caught in the inexorable thermodynamic dance of entropy
Find a tree. I mean a real-life, biological tree–and a mature one at that. Can you see any silly young branches that either shoot straight up for the sky, or ones which seem to explode in a crazy bunch sideways from the trunk? You can also approach the search from an aesthetic perspective: you’re looking for ugly, inharmonious young branches going all over the place.
If you can see any of those, you probably found water sprouts–and if so, you’re looking at an unhappy tree. Water sprouts are the tree’s middle age made manifest: I must have a plan B, hence I shall explore new strategies! Joking aside, water sprouts really are a sign of old age, bad pruning, damaged bark–or, as is the case for my own plum tree which triggered this blog post, all of the above.
Then again, I’m not an arborist. My entire train of thought was triggered by a false connection between my plum tree’s water sprouts and a wonderful AI strategy against overfitting. I myself only recently internalized overfitting: it’s the AI equivalent of brittle programming. That is, your code (AI model) is so very constrained by the original functional requirements (training data) that it fails in a spectacular manner when presented with even the slightest variation from your original expectations (excellent results for training data, horrendous aberrations for test data).
But how do you guard against overfitting in AI? Well, in pretty much the same way you guard against brittleness in software development (or in a healthy human mind): you avoid dogma, and allow for flexibility instead. Specifically, you force the model to learn less than it could at each step during training. Counter-intuitive as that might be, this strategy allows it to alternate between learning dogma and exploring new avenues for learning.
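One concrete technique that matches “force the model to learn less than it could at each step” is dropout–my example, not something the post names. Here’s a minimal NumPy sketch of inverted dropout:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, drop_prob=0.5, training=True):
    """Randomly silence a fraction of activations during training, scaling
    the survivors so the expected total stays the same (inverted dropout)."""
    if not training or drop_prob == 0.0:
        return activations
    mask = rng.random(activations.shape) >= drop_prob
    return activations * mask / (1.0 - drop_prob)

layer_output = np.ones(10)
print(dropout(layer_output))                   # roughly half the units zeroed
print(dropout(layer_output, training=False))   # inference: left untouched
```

Because different units are silenced at every step, no single unit can become dogma–the network is forced to keep exploring redundant ways to represent the data.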
Given the way this anti-overfitting learning strategy ends up alternating between reinforcing previously learned lessons and deliberately ignoring known lessons in favor of child-like, naive examination of the data, this learning strategy is known as the exploration/exploitation strategy: exploit the lessons already learned by reinforcing them, but make sure to also explore new learning avenues.
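The exploration/exploitation trade-off is usually introduced with multi-armed bandits, so here’s a hedged epsilon-greedy sketch (the arm values and epsilon are made-up example numbers):

```python
import random

def epsilon_greedy(estimates, epsilon=0.1, rng=random):
    """With probability epsilon, pick a random arm (explore: naive,
    child-like examination); otherwise pick the arm with the best
    current estimate (exploit: reinforce the lessons already learned)."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))
    return max(range(len(estimates)), key=estimates.__getitem__)

# Toy example: running value estimates for three slot machines.
estimates = [0.2, 0.5, 0.1]
random.seed(42)
choices = [epsilon_greedy(estimates, epsilon=0.2) for _ in range(1000)]
print(f"Best arm chosen {choices.count(1) / len(choices):.0%} of the time")
```

Most pulls exploit the best-looking arm, but a steady trickle of random pulls keeps checking whether the other arms were unfairly dismissed.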
Well, when I noticed how my plum tree shot some branches straight up towards the sky I remembered about exploration/exploitation, and I thought I had found a wonderful parallel between biology and AI strategies–behold a tree which applies an AI strategy! Then I learned about water sprouts, and I realized I just happen to own an unkempt old tree, and that that has nothing whatsoever to do with AI or anything else, really.
Did you know you can effectively build a class II lever out of plain rope? Just tie a trucker’s hitch knot on any old piece of rope, and you get a block and tackle system which provides the same mechanical advantage as a class II lever. Mind. Blown.
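In the ideal, frictionless case the trucker’s hitch behaves like a 3:1 block and tackle: the load hangs off three strands of rope, so your pull gets multiplied by three (real rope-on-rope friction eats a good chunk of that). A trivial sketch of the arithmetic:

```python
def block_and_tackle_force(pull_force_n, supporting_strands=3):
    """Ideal mechanical advantage equals the number of rope strands
    supporting the load; a trucker's hitch rigs three of them."""
    return pull_force_n * supporting_strands

# Pull with 100 N and the load feels roughly 300 N (ideal, no friction).
print(block_and_tackle_force(100))
```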
You’re probably already familiar with the trolley problem: should you act to save five people at the cost of killing one person, or should you choose not to act, allowing five people to die? In the original problem, I’d answer that the moral thing to do would be to act—if there are only two options, and if I’m the only person who can act, then I should act.
That’s settled, then: when you can act to save five people at the cost of killing one, you should. And if that’s a solid rule, we should always apply it. You’re a surgeon, and five of your patients desperately need organ transplants, or they will all die within the week—and they all need different organs. You can choose not to act, thus allowing all five people to die. Or you can act to save all five people by killing a healthy person walking down the street and collecting their organs. For some reason, this time it’s rather difficult to argue that you should act to save five people at the cost of killing one person.
As intriguing as that might be, it’s just a thought experiment. But Melanie Mitchell’s wonderful Artificial Intelligence: A Guide for Thinking Humans cites an actual study which shows moral choices are equally ambiguous in real life. Consider an autonomous vehicle, a.k.a. self-driving car, having to choose between sacrificing ten pedestrians on an alley and killing the single passenger in the vehicle by veering into a building. In one survey, 76% of the respondents answered it would be morally preferable for the AI in the self-driving car to sacrifice the single passenger.
I’d answer the same, and I expect you would, as well. The surprise came later: they asked the very same respondents whether they’d buy a self-driving car which preferred to kill their passengers in order to save pedestrians by the same rules as above. They overwhelmingly declined.
All EEPROM is organized into conventional memory cells with variable storage capacity per cell;
All EEPROM cells are susceptible to failure after a given number of write cycles;
Irrelevant factoid: NAND is not not an acronym–it stands for Not And (as opposed to NOR, the other kind of Flash memory). I say it’s not not an acronym because people tend to assume acronyms are restricted to the initial letters of words, when acronyms are in fact absolute beasts which can devour entire groups of letters at the beginning of words!
You definitely want PCI Express, not SATA III;
Even low end PCI-E M.2 drives are roughly 4x faster than SATA III drives of any kind;
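That “roughly 4x” can be sanity-checked against the published interface ceilings (real drives land below these, which is why I quote the conservative 4x rather than the theoretical ratio):

```python
# SATA III: 6 Gb/s line rate, 8b/10b encoding -> usable MB/s.
SATA3_MBPS = 6_000 * 8 / 10 / 8

# PCIe 3.0 x4: four lanes at 8 GT/s, 128b/130b encoding -> usable MB/s.
PCIE3_X4_MBPS = 4 * 8_000 * 128 / 130 / 8

print(f"SATA III ceiling:    {SATA3_MBPS:.0f} MB/s")
print(f"PCIe 3.0 x4 ceiling: {PCIE3_X4_MBPS:.0f} MB/s")
print(f"ratio: {PCIE3_X4_MBPS / SATA3_MBPS:.1f}x")
```

Even a drive that only uses half of that PCIe headroom still laps the fastest possible SATA III drive.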
You probably want MLC, not any of the other ones;
*LC (*–Level Cell) indicates how many bits are written per memory cell. Given the production costs per cell, the write rate limitations per cell, and the MTBF per cell, packing more bits per cell reduces costs, but it also limits overall data throughput and the drive’s lifespan;
MLC is in the serious hobbyist’s Goldilocks zone for the level cell size:
SLC (1x) is the best of the best, but it comes at a serious premium; MLC (2x) is below premium but above consumer level, then TLC (3x), QLC (4x), and at some point in the future PLC (5x);
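The reason density costs endurance falls straight out of the bit counts: storing n bits per cell means the controller must reliably tell apart 2^n distinct charge levels in the same physical cell. A two-line sketch:

```python
def levels_per_cell(bits):
    """Number of distinct charge levels a cell must distinguish
    to store the given number of bits."""
    return 2 ** bits

# SLC juggles 2 levels; QLC has to discriminate 16 in the same cell,
# so its margins shrink and wear hits it sooner.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    print(f"{name}: {bits} bit(s)/cell, {levels_per_cell(bits)} charge levels")
```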
You might want 3D technology, if you’re willing to spend extra;
3D memory storage sounds like marketing mumbo-jumbo, but it’s a serious technological leap. You can only pack so many cells on a 2D rectangle before you reach interference limits caused by fundamental physics – and the M.2 form factor does impose physical size limits. Although height is just as limited as length or width, it’s way more efficient to double the very tiny height of the board than it is to halve its very large length and still keep the cells isolated from interference and ultimate failure.