Categories: Uncategorized

We’re all the same on the inside

No, seriously, that’s the whole point of this post: on the inside, we’re all the same; and you know what I mean. We all know we’re not, really–but in truth, we really are. You know what I mean, right?


Categories: Pop culture

On April Fools’ Day

April 1st used to be the hipster troll’s celebration: being a troll before it was cool to be a troll. Or rather, before there was even a word for “troll”.

I totally love the mental image you get by combining “hipster” with the (misguided) popular perception of the noun “troll” (look it up above).


Categories: IT&C

Math + art = ❤️

I just remembered an older math/art personal project you might be interested in: my silly AerialodSlopefield project from back in 2020, built on my open-source code, which uses @ephtracy’s wonderful Aerialod renderer (a spin-off from his main MagicaVoxel project).

Anyway, here’s a sample image:

Image 1

Audiophile 101 (for two speakers)

The single most important difference you can make in your music listening experience for the low, low cost of zero cents is geometry. If you haven’t tried optimizing the geometry before, I promise you’ll be totally astonished–it only takes a few minutes to check it out, it’s totally free, and the leverage is just as large for crappy plastic PC speakers as it is for professional studio monitors.

Here’s the basic recipe:

  1. The two speakers and your listening position should be the corners of an equilateral triangle; “listening position” means the center of your head when you’re focused on listening to music; check this with a ruler;
  2. Both speakers should be oriented directly towards your head; check this visually–you shouldn’t be able to see any of the sides of either speaker from your listening position; adjust as needed;
  3. As much as possible, try to keep the speakers level with your listening position: neither higher, nor lower;
  4. The smaller the speakers, the smaller the triangle–and the lower the optimal listening volume;
  5. Do your best to avoid placing the speakers just in front of massive objects; both “massive objects” and “just in front” should be scaled with the speakers–for room-sized speakers, “massive objects” are walls, and “just in front” means less than 30-100 cm away; for PC speakers, “massive objects” are the PC itself (or larger), and “just in front” means less than 15-25 cm away.
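For the geometrically inclined, the triangle in step 1 is easy to compute. Here’s a minimal sketch (the coordinate convention and function name are my own illustration, not part of the recipe): given the floor positions of the two speakers, it returns the listening position that completes the equilateral triangle.

```python
import math

def listening_position(left, right):
    """Given the (x, y) floor coordinates of the two speakers, return
    the listening position completing an equilateral triangle, placed
    on the listener's side of the speaker axis."""
    (x1, y1), (x2, y2) = left, right
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2  # midpoint between the speakers
    dx, dy = x2 - x1, y2 - y1              # left -> right speaker vector
    h = math.sqrt(3) / 2                   # equilateral triangle height, per unit side
    # step out perpendicular to the speaker axis by the triangle height
    return (mx + dy * h, my - dx * h)

# Speakers 2 m apart: the sweet spot sits 2 m from each of them.
spot = listening_position((0.0, 0.0), (2.0, 0.0))
```

By construction, the distance from the returned point to either speaker equals the distance between the speakers, which is exactly the step-1 rule with a ruler.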

Try it, I promise it’s going to be a more meaningful experience than buying gold-plated connectors for $200 speakers!


On warm welcomes

In thermodynamics, the temperature of an object is a measure of the average kinetic energy of its particles. Loosely speaking–without any significant loss of generality–the hotter an object, the more its particles jostle about and interact with one another.

In natural language, at least in English and in my native Romanian, we use temperature-related adjectives to describe human interactions and relationships: warm welcomes, cold business meetings, warm friendships, cold people, or the infamous lukewarm reception of an idea.

I don’t know about you, but I find it quite satisfying when natural language happens to align itself so well with thermodynamics: cold meetings and people interact little, warm welcomes and friendships interact lots.

All the world’s a stage,
And all the men and women merely particles caught in the inexorable thermodynamic dance of entropy

William Shakespeare, As You Like It (quoted from memory)

Categories: Humor, PC

BSOD

Original art. Sources: blue screen + the Reaper.


Categories: AI

A failed parallel between water sprouts and AI strategies

Find a tree. I mean a real-life, biological tree–and a mature one at that. Can you see any silly young branches that either shoot straight up for the sky, or seem to explode sideways from the trunk in a crazy bunch? You can also approach the search from an aesthetic perspective: you’re looking for ugly, inharmonious young branches going all over the place.

If you can see any of those, you probably found water sprouts–and if so, you’re looking at an unhappy tree. Water sprouts are the tree’s middle age made manifest: I must have a plan B, hence I shall explore new strategies! Joking aside, water sprouts really are a sign of old age, bad pruning, damaged bark–or, as is the case for my own plum tree which triggered this blog post, all of the above.

Then again, I’m not an arborist. My entire train of thought was triggered by a false connection between my plum tree’s water sprouts and a wonderful AI strategy against overfitting. I myself only recently internalized overfitting: it’s the AI equivalent of brittle programming. That is, your code (AI model) is so tightly constrained by the original functional requirements (training data) that it fails in a spectacular manner when presented with even the slightest variation from your original expectations (excellent results on training data, horrendous aberrations on test data).

But how do you guard against overfitting in AI? Well, in pretty much the same way you guard against brittleness in software development (or in a healthy human mind): you avoid dogma, and allow for flexibility instead. Specifically, you force the model to learn less than it could at each step during training. Counter-intuitive as that might be, this strategy allows it to alternate between learning dogma and exploring new avenues for learning.
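One concrete (and very common) way of forcing the model to learn less than it could at each step is dropout: randomly silencing a fraction of the network’s activations during every training step. A minimal sketch in plain Python–the function and its parameters are my own illustration, not a quote from any particular library:

```python
import random

def dropout(activations, p=0.5, training=True):
    """During training, zero out each activation with probability p and
    scale the survivors by 1/(1-p) so their expected sum is unchanged
    (the "inverted dropout" convention). At inference time, do nothing."""
    if not training or p == 0:
        return list(activations)
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]
```

Because no single activation is allowed to become indispensable, the network can’t memorize the training data quite as rigidly–which is precisely the flexibility-over-dogma idea.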

Given the way this anti-overfitting strategy alternates between reinforcing previously learned lessons and deliberately ignoring them in favor of a child-like, naive examination of the data, it’s known as the exploration/exploitation strategy: exploit the lessons already learned by reinforcing them, but make sure to also explore new learning avenues.
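The textbook illustration of this trade-off is the epsilon-greedy rule from reinforcement learning: most of the time you exploit the best option found so far, but with a small probability epsilon you explore a random one instead. A minimal sketch (the names are mine):

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Pick an index into `estimates`: with probability epsilon, explore
    a random option; otherwise exploit the current best estimate."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])
```

With epsilon = 0 the agent is pure dogma; with epsilon = 1 it’s a pure naive explorer; real systems live somewhere in between.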

Well, when I noticed how my plum tree shot some branches straight up towards the sky, I remembered exploration/exploitation, and I thought I had found a wonderful parallel between biology and AI strategies–behold, a tree which applies an AI strategy! Then I learned about water sprouts, and I realized I just happen to own an unkempt old tree, and that that has nothing whatsoever to do with AI or anything else, really.

Overfitting happens in meat brains, too.


Categories: Engineering

Class II lever out of rope

Did you know you can effectively build a class II lever out of plain rope? Just tie a trucker’s hitch knot on any old piece of rope, and you get a block and tackle system which provides the same mechanical advantage as a class II lever. Mind. Blown.
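For the curious, the classic trucker’s hitch has an ideal mechanical advantage of 3:1–three rope segments share the load, so (ignoring friction) you pull with a third of the force. A back-of-the-envelope sketch; the efficiency factor is my own rough way of accounting for friction at the loops, where rope rubs on rope instead of rolling over pulleys:

```python
def effort_force(load_newtons, segments=3, efficiency=1.0):
    """Ideal block-and-tackle math: effort = load / (supporting rope
    segments * efficiency). efficiency < 1 models friction losses;
    a rope-on-rope trucker's hitch wastes quite a bit to friction."""
    return load_newtons / (segments * efficiency)

# Tensioning a 300 N load with a frictionless 3:1 trucker's hitch:
ideal = effort_force(300)  # 100.0 N of pull
```

In practice the rope-on-rope friction eats into that 3:1 substantially, but it also helps hold the tension while you tie off the knot.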


Categories: Morality

Self-driving cars prove that morality is objectively difficult

You’re probably already familiar with the trolley problem: should you act to save five people at the cost of killing one person, or should you choose not to act, allowing five people to die? In the original problem, I’d answer that the moral thing to do would be to act—if there are only two options, and if I’m the only person who can act, then I should act.

That’s settled, then: when you can act to save five people at the cost of killing one, you should. And if that’s a solid rule, we should always apply it. You’re a surgeon, and five of your patients desperately need organ transplants, or they will all die within the week—and they all need different organs. You can choose not to act, thus allowing all five people to die. Or you can act to save all five people by killing a healthy person walking down the street and collecting their organs. For some reason, this time it’s rather difficult to argue that you should act to save five people at the cost of killing one person.

As intriguing as that might be, it’s just a thought experiment. But Melanie Mitchell’s wonderful Artificial Intelligence: A Guide for Thinking Humans cites an actual study showing that moral choices are equally ambiguous in real life. Consider an autonomous vehicle, a.k.a. self-driving car, having to choose between sacrificing ten pedestrians on an alley and killing the single passenger in the vehicle by veering into a building. In one survey, 76% of the respondents answered it would be morally preferable for the AI in the self-driving car to sacrifice the single passenger.

I’d answer the same, and I expect you would, as well. The surprise came later: they asked the very same respondents whether they’d buy a self-driving car which preferred to kill their passengers in order to save pedestrians by the same rules as above. They overwhelmingly declined.


Categories: DIY, PC

Things to consider when shopping for an M.2 drive

TL;DR: HDD < SSD < M.2 * < M.2 PCI-E * < M.2 PCI-E MLC * < M.2 PCI-E MLC ~3D

Things I learned today about M.2 drives: