Darth Wiki / Idiot Design

So you wanna know where I learned about dovetail joints? My high school beginner's woodworking class. It must have been brainstorming amateur hour at Huawei when they suggested joining aluminum and plastic with a woodworking joint.
Zack Nelson, on one of the reasons the Nexus 6P failed the bend test so catastrophically.

Every once in a while, we encounter an item with a design flaw so blatant that one can only wonder how no one thought to fix that before releasing it to the public. Whether it be the result of unforeseen consequences of a certain design choice, favoring style over functionality, cost-cutting measures gone too far, rushing a product to meet a certain release date at the cost of skipping important testing, or simply pure laziness on the creators' parts, these design flaws can result in consequences ranging from unintentional hilarity, to mild annoyance, to rendering the product unusable, to potentially even putting the users' lives in danger.

See also Idiot Programming for when the flaw comes from poor coding practices. See the real life section of The Alleged Car for automotive examples.

Specific companies:

    Apple 
This video highlights Apple's many, many hardware design failures from 2008 onwards, some of which are listed below:

  • The original Apple II was one of the first home computers to have color graphics, but it had its share of problems:
    • Steve Wozniak studied the design of the electronics in Al Shugart's floppy disk drive and came up with a much simpler circuit that did the same thing. But his implementation had a fatal flaw: the connector on the interface cable that connected the drive to the controller card in the computer was not polarized or keyed - it could easily be connected backwards or misaligned, which would fry the drive's electronics when the equipment was powered up (Shugart used a different connector which could not be inserted misaligned, and if it were connected backward it wouldn't damage anything; it just wouldn't work). Apple "solved" this problem by adding a buffer chip between the cable and the rest of the circuit, whose purpose was to act as a multi-circuit fuse which would blow if the cable were misconnected, protecting the rest of the chips in the drive.
    • The power switch on the Apple II power supply was under-rated and had a tendency to burn out after repeated use. Unlike the "fuse" chip in the disk drives (which was socketed), the power switch was not user-replaceable. The recommended "fix": leave the power switch "on" all the time and use an external power switch to turn the computer off. Fine if annoying in the UK, Ireland, Australia, New Zealand, and other countries where most, if not all, wall sockets are switched; not so fine in mainland Europe or North America, where switched wall sockets were rare at the time. At least one vendor offered an external power switch module shaped to fit nicely behind the Apple II, but most users simply plugged their computer into a standard power strip and used its on/off switch to turn their equipment off.
  • The old Apple III was three parts stupid and one part hubris; the case was completely unventilated and the CPU didn't even have a heat sink. Apple reckoned that the entire case was aluminum, which would work just fine as a heat sink, no need to put holes in our lovely machine! This led to the overheating chips actually becoming unseated from their sockets; tech support would advise customers to lift the machine a few inches off the desktop and drop it, the idea being that the shock would re-seat the chips. It subsequently turned out that the case wasn't the only problem, since a lot of the early Apple IIIs shipped with defective power circuitry that ran hotter than it was supposed to, but it helped turn what would have otherwise been an issue that affected a tiny fraction of Apple IIIs into a widespread problem. Well, at least it gave Cracked something to joke about.
    • A lesser, but still serious design problem existed with the Power Mac G4 Cube. Like the iMacs of that era, it had no cooling fan and relied on a top-mounted cooling vent to let heat out of the chassis. The problem was that the Cube had more powerful hardware crammed into a smaller space than the classic iMacs, meaning that the entirely passive cooling setup was barely enough to keep the system cool. If the vent was even slightly blocked, however, then the system would rapidly overheat. Add to that the problem of the Cube's design being perfect for putting sheets of paper (or worse still, books) on top of the cooling vent, and it gets worse. Granted, this situation relied on foolishness by the user for it to occur, but it was still a silly decision to leave out a cooling fan (and one that thankfully wasn't repeated when Apple tried the same concept again with the Mac Mini).
    • Another issue related to heat is that Apple has a serious track record of not applying thermal grease appropriately in their systems. Most DIY computer builders know that a rice-grain-sized glob of thermal grease is enough; Apple consistently cakes the chips that need it in far more than that.
    • Heat issues are also bad for MacBook Pros. Not so much for casual users, but very much so for heavy processor load applications. Since the MBP is de rigueur for musicians (and almost as much for graphic designers and moviemakers), this is a rather annoying problem, since Photoshop with a lot of images or layers or any music software with a large number of tracks will drive your temperature through the roof. Those who choose to game with an MBP have it even worse - World of Warcraft will start to cook your MBP within 30 minutes of playing, especially if you have a high room temperature. The solution? Get the free software programs Temperature Monitor and SMCFanControl, then keep an eye on your temps and be very liberal with upping the fans. The only downsides to doing so are more noise, a drop in battery time, and possible fan wear, but that's far better than your main system components being fried or worn down early.
  • The very first iPhone had its headphone jack in a recess on the top of the phone. While this worked fine with the stock Apple earbuds, headphones with larger plugs wouldn't fit without an adapter.
  • Apple made a big mistake with one of their generations of the iPhone. Depending on how you held it, it could not receive signals. The iPhone 4's antenna is integrated into its outside design and is a bare, unpainted aluminum strip around its edge, with a small gap somewhere along the way. To get a good signal strength it relies on this gap being open, but if you hold the phone in a certain way (which "accidentally" happens to be the most comfortable way to do so, especially if you're left-handed), your palm covers that gap and, if it's the least bit damp, shorts it, rendering the antenna completely useless. Lacquering the outside of the antenna, or simply moving the air gap a bit so it doesn't get shorted by the user's hand, would've solved the problem with ease, but apparently Apple's much more concerned about its "product identity" than about its users. Apple suggested that users were "holding it wrong". As it turns out, Apple would soon be selling modification kits for $25 a pop, for an issue that, by all standards, should have been fixed for free, if not discovered and eliminated before it even hit the market. Apple was sued by at least three major parties over the issue.
  • MacBook disc drives are often finicky to use, sometimes not reading the disc at all and getting it stuck in the drive. The presented solutions? Restarting your computer and holding down the mouse button until it ejects. And even that isn't guaranteed - sometimes the disc will jut out just enough that the solution won't register at all and pushing it in with a pair of tweezers finishes the job. To put this in perspective, technologically inferior video game consoles like the Wii and PlayStation 3 can do a slot-loading disc drive far better than Apple apparently can.
  • Say what you will about the iPhone 7 merging the headphone jack into the charging port, but there's no denying that upon closer inspection, this combined with the lack of wireless charging creates a whole new problem. As explained in this video, the charging port can only withstand a certain amount of wear and tear (somewhere between 5,000 and 10,000 insertions, although only Apple themselves know the exact number). Because you're now using the same port for two different things, chances are you'll wear it out twice as fast as on any other iPhone. And because the phone doesn't have a wireless charging function like most other phones, once that port wears out, your phone is toast.
  • The Apple Magic Mouse 2 tried to improve upon the original Magic Mouse by making its battery rechargeable. Unfortunately, this choice was widely ridiculed as for some reason the charging port was located on the underside of the mouse, rendering it inoperable while plugged in. One wonders why they couldn't just put the port on the front, like almost every other chargeable mouse. Apparently, it's to preserve the mouse's aesthetics, in yet another case of Apple favoring aesthetics over usability.
  • Several users have reported that plugging a charger into their Thunderbolt 3 MacBook Pro makes the left topmost port output 20V instead of the standard 5V, effectively frying whatever is plugged in. Four ports you could plug a charger into, and apparently nobody tested what happens when the charger goes into any of the others.
  • The iMac G3 was a hugely successful computer that pulled Apple right out of its Audience-Alienating Era that started and ended with Steve Jobs' absence and return. It won consumers over with its elegant design, simplistic usability, and several big innovations like prioritizing USB in an era where it was still seen as mostly a footnote. It also came supplied with what is considered one of the absolute worst mouse designs of all time, if not the worst. Nicknamed the hockey puck mouse due to its small, flat circular shape, the mouse was heavily criticized not only for being extremely uncomfortable to use, but also for being very easy to accidentally rotate while using it and requiring the user to reorient it all the time, with nothing about its design able to keep it oriented in the right direction. The mouse is also very light with little to no weightiness to speak of, making it easy to also accidentally lift up. It's a great example of a product attempting to look different simply for the sake of looking different, with little regard to why we use the tried-and-true methods to begin with. Unsurprisingly, their next mouse would revert back to the far more common oval shape that more nicely fits in a typical palm, whose worst problem was still having only a single button well after many other computers had all but standardized around three-button mice.

    Intel 
  • The "Prescott" core Pentium 4 has a reputation for being the worst CPU design in history. It had some design trade-offs which lessened the processor's performance-per-clock over the original Pentium 4 design, but theoretically allowed the Prescott to run at much higher clockspeeds. Unfortunately, these changes also made the Prescott vastly hotter than the original design — something that was admittedly exacerbated than their 90nm manufacturing process actually being worse for power consumption than the previous 130nm process when it came to anything clocked higher than around 3GHz — making it impossible for Intel to actually achieve the clockspeeds they wanted. Moreover, they bottlenecked the processor's performance, meaning that Intel's usual performance-increasing tricks (more cache and faster system buses) did nothing to help. By the time Intel came up with a new processor that put them back in the lead, the once hugely valuable Pentium brand had been rendered utterly worthless by the whole Prescott fiasco, and the new processor (based on the Pentium III microarchitecture) was instead called the Core 2. The Pentium name is still in use, but is applied to the mid-end processors that Intel puts out for cheap-ish computers, somewhere in between the low-end Celerons and the high-end Core line.
    • While the Prescott iteration of the design had some very special problems of its own, the Pentium 4 architecture in general had a rather unenviable reputation for underperforming. The design was heavily optimised in favor of being able to clock to high speeds in an attempt to win the "megahertz war" on the grounds that consumers at the time believed that higher clock speed equaled higher performance. The sacrifices made in the P4 architecture in order to achieve those high clock speeds, however, resulted in very poor performance per tick of the processor clock. For example, the processor had a very long instruction decode pipeline note  which was fine if the program being executed didn't do anything unexpected like jump to a new instruction, but if it did, all instructions in the pipeline had to be discarded, stalling the processor until the new program execution flow was loaded into the pipeline - and because the pipeline was a lot deeper than the previous Pentium III's, the processor would stall for several clock cycles while the pipeline was purged and refreshed. The science of branch prediction was in its infancy at that point, so pipeline stalls were a common occurrence on Pentium 4 processors. This, combined with other boneheaded design decisions like the omission of a barrel shifter note  and providing multiple execution units but only letting one execute per clock cycle under most circumstances, meant that the contemporary Athlon processor from AMD could eat the P4 alive at the same clock speed thanks to a far more efficient design (the last problem was partially solved with the concept of "hyperthreading", presenting a single-core processor to the OS as a 2-core processor and using some clever trickery in the chip itself to allow the execution units that would otherwise sit idle to execute a second instruction in parallel, provided it met certain criteria). A small demonstration of what a mispredicted branch costs on any deeply pipelined CPU follows this entry.
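The sketch below is illustrative only and not specific to the Pentium 4: it sums the "large" values in an array, first in shuffled order (the branch is unpredictable, so a deeply pipelined CPU keeps flushing), then after sorting (the branch becomes predictable). All names and sizes are arbitrary; build with light optimization (e.g. -O1), since an aggressive compiler may replace the branch with a conditional move and hide the effect.

```c
/* Illustrative only: the cost of branch mispredictions on a deeply
 * pipelined CPU. The same work is done twice; only the predictability
 * of the branch changes. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)                 /* 4M elements, arbitrary size */

static long sum_big_values(const int *a, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (a[i] >= 128)            /* ~50/50 branch, unpredictable on shuffled data */
            sum += a[i];
    return sum;
}

static int cmp_int(const void *x, const void *y) {
    return *(const int *)x - *(const int *)y;
}

int main(void) {
    int *a = malloc(N * sizeof *a);
    for (int i = 0; i < N; i++)
        a[i] = rand() % 256;

    clock_t t0 = clock();
    long s1 = sum_big_values(a, N);         /* shuffled: constant mispredictions */
    clock_t t1 = clock();

    qsort(a, N, sizeof *a, cmp_int);        /* sorting makes the branch predictable */
    clock_t t2 = clock();
    long s2 = sum_big_values(a, N);
    clock_t t3 = clock();

    printf("shuffled: %ld in %.3fs   sorted: %ld in %.3fs\n",
           s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
           s2, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```

The deeper the pipeline, the more cycles each wrong guess throws away, which is why the gap between the two timings was so much worse on the P4 than on its shallower-piped contemporaries.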
  • The Prescott probably deserves the title of worst x86 CPU design ever (although there might be a case for the 80286), but allow us to introduce you to Intel's other CPU project in the same era: the Itanium. Designed for servers, using a bunch of incredibly-cutting-edge hardware design ideas. Promised to be incredibly fast. The catch? It could only hit that theoretical speed promise if the compiler generated perfectly optimized machine code for it. It turned out you couldn't optimize most of the code that runs on servers that hard, because programming languages suck,note  and even if you could, the compilers of the time weren't up to it. It also turned out if you didn't give the thing perfectly optimized code, it ran about half as fast as the Pentium 4 and sucked down twice as much electricity doing it. This was right about the time server farm operators started getting serious about cutting their electricity and HVAC bills, too.
    • Making things worse, this was actually Intel's third attempt at implementing such a design. The failure of their first effort, the iAPX-432, was somewhat forgivable given that it wasn't really possible to achieve what Intel wanted on the manufacturing processes available in the early 1980s. What really should have taught them the folly of their ways came later in the decade with the i860, a much better implementation of what they had tried to achieve with the iAPX-432... which still happened to be both slower and vastly more expensive than not only the 80386 (Intel released the 80486 a few months before the i860) but also the i960, a much simpler and cheaper design which subsequently became the Ensemble Dark Horse of Intel and is still used today in certain roles.
    • In the relatively few situations where it gets the chance to shine, the Itanium 2 and its successors can achieve some truly awesome performance figures. The first Itanium, on the other hand, was an absolute joke. Even if you managed to get all your codepaths and data flows absolutely optimal, the chip would only perform as well as a similarly clocked Pentium III. Intel themselves went so far as to recommend that only software developers should even think about buying systems based on the first Itanium, and that everyone else should wait for the Itanium 2, which probably ranks as one of the most humiliating moments in the company's history.
      • The failure of the first Itanium was largely down to the horrible cache system that Intel designed for it. While the L1 and L2 caches were both reasonably fast (though the L2 cache was a little on the small side), the L3 cache used the same off-chip cache system designed three years previously for the original Pentium II Xeon. By the time the Itanium had hit the streets however, running external cache chips at CPU speeds just wasn't possible anymore without some compromise, so Intel decided to give them extremely high latency. This proved to be an absolutely disastrous design choice, and negated the effects of the cache. Moreover, Itanium instructions are four times larger than x86 ones, leaving the chip strangled between its useless L3 cache, and L1 and L2 caches that weren't big or fast enough to compensate. Most of the improvement in Itanium 2 came from Intel simply making the L1 and L2 caches similar sizes but much faster, and incorporating the L3 cache into the CPU die.
  • Intel's Atom wasn't far behind. The first generation (Silverthorne and Diamondville) was even slower than a Pentium III: yes, despite having low power consumption, the CPU performance was awful. To make it worse, it only officially supported Windows XP, and even that lagged. The following generations prior to Bay Trail were mere attempts to be competitive, but sadly they were still slower than a VIA processor (and VIA chips were considered slow in their own time).
    • While the Diamondville (N270 and N280) was just barely fast enough to power a light-duty laptop, the Silverthorne (Z530 and Z540) was meant for mobile devices and had even lower performance, entirely insufficient for general-purpose computing. But the mobile market was already well in the hands of ARM chips, so Intel ended up with warehouses full of Silverthorne CPUs that nobody wanted. And so it was that they enacted license restrictions that forced manufacturers to use Silverthorne CPUs for devices with screens wider than 9 inches, foisting onto the public laptops whose abysmal performance infuriated their buyers and turned many off the concept of the netbook as a whole.
    • And then there's Intel SoFIA. Intel had found success in the mobile arena with Moorefield (Bay Trail chips with a PowerVR GPU) and people expected them to continue down that path, but instead they paired cut-down x86 cores with a Mali GPU in order to reduce costs. Sadly, the resulting chips were only about as fast as the slowest ARM design available at the time (the Cortex-A7), if not worse. And ARM's slowest cores were already being replaced by the A53/A35 as the new low tier, leaving Intel far behind. No wonder they cancelled SoFIA.
  • While Intel's CPU designers have mostly been able to avoid any crippling hardware-level bugs since the infamous FDIV bug in 1993 (say what you will about the Pentium 4, but at least it could divide numbers correctly), their chipset designers seem much more prone to making screw-ups:
    • Firstly there was the optional Memory Translator Hub (MTH) component of the 820 chipset, which was supposed to allow the usage of more reasonably-priced SDRAM instead of the uber-expensive RDRAM that the baseline 820 was only compatible with. Unfortunately the MTH didn't work at all in this role (causing abysmally poor performance and system instability) and was rapidly discontinued, eventually forcing Intel to create the completely new 815 chipset to provide a more reasonable alternative for consumers.
    • Then there were the 915 and 925 chipsets; both had serious design flaws in their first production run, which required a respin to correct, and ultimately forced Intel to ditch the versions they had planned with Wi-Fi chips integrated into the chipset itself.
    • The P67 and H67 chipsets were found to have a design error that supplied too much power to the SATA 3 Gbps controllers, which would cause them to burn out over time (though the 6 Gbps controllers were unaffected, oddly enough).
    • The high-end X79 chipset was planned to have a ton of storage features available, such as up to a dozen Serial Attached SCSI ports along with a dedicated secondary DMI link for storage functions... only for it to turn out that none of said features actually worked, meaning that it ended up being released with fewer features than its consumer counterparts.
    • A less severe problem afflicts the initial runs of the Z87 and H87 chipsets, in which USB 3.0 devices can fail to wake up when the system comes out of standby, and have to be physically disconnected and reconnected for the system to pick them up again.
    • RDRAM was touted by Intel and Rambus as a high-performance RAM for the Pentium III to be used in conjunction with the 820. But implementation-wise, it was not up to snuff (in fact, benchmarks revealed that applications ran slower with RDRAM than with the older SDRAM!), and very expensive; third-party chipset makers (such as SiS, who gained some fame during this era) went to cheaper DDR RAM instead (and begrudgingly, so did Intel, leaving Rambus with egg on its face), which ultimately became the de facto industry standard. RDRAM still found use in other applications, though, like the Nintendo 64 and PlayStation 2... where it turned out to be one of the biggest performance bottlenecks on both systems — the N64 had twice the memory (and twice the bandwidth on it) of the PlayStation, but such high latency on it, combined with a ridiculously small buffer for textures loaded into memory to be applied, that it negated those advantages entirely, while the PS2's memory, though having twice the clock speed of the Xbox's older SDRAM, could only afford half as much memory and half the bandwidth on it, contributing to it having the longest load times of its generation.
      • A small explanation of what happened: Rambus RDRAM memory is more serial in nature than more-traditional memory like SDRAM (which is parallel). The idea was that RDRAM could use a high clock rate to compensate for the narrow bit width (RDRAM also used a neat innovation: dual data rate, using both halves of the clock signal to send data; however, two could play that game, and DDR SDRAM soon followed). But there were two problems. First, all this conversion required additional complex (and patented) hardware, which raised the cost. Second, and more critically, this kind of electrical maneuvering involves conversions and so on, which adds latency... and memory is one of the areas where latency is a key metric: the lower the better. SDRAM, for all its faults, operated more on a "keep it simple, stupid" principle, and it worked, and later versions of the technology introduced necessary complexities at a gradual pace (such as the DDR2/3 preference for matched pairs/trios of modules), making them more tolerable. A small code sketch of why latency, not just bandwidth, is the killer follows at the end of this entry.
    • Another particularly egregious chipset screw-up was with the integrated versions on the aforementioned Silvermont and Airmont/Braswell Atom chips. In late 2016, Intel revealed that the clock for the LPC bus on server models could fail, shutting down the bus and cutting off access to a host of essential and not-so-essential components. More often than not, an affected device would turn into a brick, not even able to boot. To their credit, Intel eventually managed to release a new series of server chips which corrected the problem...only to find out months later that many of their consumer-grade Silvermont chips could experience the same problem, only worse. With the consumer chips, it wasn't just the LPC bus that tended to break - the USB and SD card controllers often fried themselves as well. Once again, Intel rushed out a new "stepping" of the affected Atom products, this time alongside updated firmware that coddled the earlier, broken versions into working as long as possible. Think that's the end of this little story? Think again. When Intel moved to the succeeding Airmont series of Atom processors, they somehow managed to screw up the design even more: the USB controller was fixed, but now the real-time clock (which holds the time and date) would wear out early instead. At that point, the team at Intel just went "to Hell with it" and left Airmont as-is.
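Here is the latency-versus-bandwidth sketch promised in the RDRAM entry above. It's illustrative only (the array size and names are arbitrary): the same array is read twice, once as a sequential stream that is limited mostly by bandwidth, and once as a dependent pointer chase in which every load must wait out the full memory latency before the next address is even known.

```c
/* Illustrative only: bandwidth-bound streaming vs latency-bound pointer
 * chasing over the same data. The chase is built as a single random cycle
 * (Sattolo's algorithm) so the CPU can't prefetch its way out of it. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 23)         /* 8M entries (~64 MB), arbitrary size */

int main(void) {
    size_t *next = malloc(N * sizeof *next);

    for (size_t i = 0; i < N; i++)
        next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {    /* Sattolo shuffle: one big cycle */
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    clock_t t0 = clock();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++)          /* sequential stream: bandwidth-bound */
        sum += next[i];
    clock_t t1 = clock();

    size_t p = 0;
    for (size_t i = 0; i < N; i++)          /* dependent chase: latency-bound */
        p = next[p];
    clock_t t2 = clock();

    printf("stream: %.3fs   chase: %.3fs   (checksums %zu %zu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum, p);
    free(next);
    return 0;
}
```

High-clock, high-latency RDRAM helped the first loop and hurt the second, which is why real-world results so often disappointed.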
  • Intel's SSDs have a particular failure mode that rubs people the wrong way: After the drive determines that its lifespan is up from a certain amount of writes, the drive goes into read-only mode. This is great and all until you consider that the number of writes is often lower compared to other SSDs of similar technology (up to about 500 terabytes versus 5 petabytes) and that you only have one chance to read the data off the drive. If you reboot (and chances are you would, because guess what happens when the OS drive suddenly becomes read-only and the system can no longer write to its swap file? Yep, Windows freaks out and starts intermittently freezing and will eventually BSOD), the drive then goes to an unusable state, regardless of whether or not the data on the drive is still good. Worst of all is that Intel's head of storage division simply brushed it off, claiming that "the data on the disk is unreliable after its life is up" and that "people should be making regular backups nightly".
  • In early 2018, a hardware bug known as Meltdown was disclosed, affecting every x86-based Intel processor released since 1995 (except the pre-2013 Atoms): speculative code (that is, machine code that the CPU predicts it will need and runs ahead of time while it waits for the "actual" instructions to arrive) was executed before any check that the code was allowed to run at the required privilege level. This could cause ring-3 code (user-level) to access ring-0 (kernel-level) data. The result? Any kernel developers developing for Intel needed to scramble to patch their page tables so they would do extra babysitting on affected processors. This could cause a 7%-30% performance reduction on every single Intel chip in the hands of anyone who had updated their hardware in the last decade, with the exact loss depending on multiple factors. Linux kernel developers considered nicknaming their fix for the bug "Forcefully Unmap Complete Kernel With Interrupt Trampolines" (FUCKWIT), which is surely the term they felt best described Intel's hardware designers at the time. It subsequently turned out that Intel's weren't the only CPUs to be vulnerable to this form of attack, but of the major manufacturers, theirs are by far the most severely affected (AMD's current Ryzen line-up is almost immune to the exploits, and while ARM processors are also vulnerable they tend to operate in much more locked-down environments than x86 processors, making it harder to push the exploit through). The performance penalties that the fix required ended up ensuring that when AMD released their second-generation Ryzen processors later that year, it turned what would likely have been a fairly even match-up into one where AMD was able to score some very definite victories. It also forced Intel to redesign their future chipsets from the ground up to incorporate the fixes into their future processor lines.
    • It's worth noting that after the subsequent updates and fixes, the actual performance loss caused by the Meltdown fix is quite variable, and some people with Intel chips may not notice a difference - for example, the performance loss in gaming desktops/in-game benchmarks is negligible (with the modern Coffee Lake chipset in particular being virtually unaffected), while the loss on Intel's server chips is much more pronounced. It's still an utterly ridiculous oversight on Intel's part, though.
    • Meltdown later turned out to be just one member of a much larger family of hardware flaws called the transient execution CPU vulnerabilities, which are capable of leaking secure data across multiple protection boundaries, including those which were supposed to stop Meltdown. Researchers have concluded that the only way to fix the flaws permanently without incurring colossal slowdowns is to completely redesign the offending chips to a much greater extent than was first thought to be needed.
    • As for why Intel designed their processors this way, a Stack Exchange post offers an answer: the conditions that allow the Meltdown exploit to happen in the first place are a rare case in practice, and in the name of keeping the hardware as simple as possible, which is practical for a variety of reasons, nothing was done to address this edge case. As with most other security issues, unless there's a practical use for a vulnerability and the effort to exploit it is low enough, it may not be addressed, since guarding against it isn't considered worth the inconvenience. As an analogy, you can make your home as secure as Fort Knox, but there's no reason to do so if your home isn't under the same conditions (i.e. holding something valuable that people are aware of). A sketch of the cache-timing trick these exploits rely on follows below.
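For the curious, here is a hedged sketch of that cache-timing trick. It reads no privileged memory whatsoever; it only demonstrates the covert-channel half of Meltdown-style attacks, namely that unprivileged code can tell whether a cache line was recently touched by timing a load. It assumes an x86 machine and a GCC/Clang-style compiler providing __rdtsc, _mm_clflush and _mm_mfence; exact cycle counts will vary.

```c
/* Illustrative only, x86-specific: timing a load reveals whether a cache
 * line was recently touched. This is the covert channel that Meltdown-style
 * attacks use to exfiltrate data; nothing privileged is read here. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>              /* __rdtsc, _mm_clflush, _mm_mfence */

static uint64_t time_access(volatile uint8_t *p) {
    _mm_mfence();
    uint64_t start = __rdtsc();
    (void)*p;                       /* the load being timed */
    _mm_mfence();
    return __rdtsc() - start;
}

int main(void) {
    static uint8_t probe[4096];

    probe[0] = 1;                               /* bring the line into cache */
    uint64_t hot = time_access(&probe[0]);      /* fast: cache hit */

    _mm_clflush(&probe[0]);                     /* evict the line */
    _mm_mfence();
    uint64_t cold = time_access(&probe[0]);     /* slow: must go to DRAM */

    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```

In the real attacks, speculatively executed (and later squashed) code touches a cache line chosen by a secret value; timing which line got warm afterwards leaks that secret, one piece at a time.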
  • In the early '90s, to replace the aging 8259 programmable interrupt controller, Intel created what they called the Advanced Programmable Interrupt Controller, or APIC. The APIC is configured by writing to a block of memory-mapped registers. Since the APIC was backwards-compatible with the 8259 and the 8259 had a fixed location, older software would break if the APIC wasn't located somewhere the software was expecting, so Intel allowed the APIC's memory address window to be moved. This wouldn't be so much of a problem until they introduced the Intel Management Engine (IME), a standalone processor designed to manage the PC, handling things such as cooling and remote access. The problem with the IME is that it's a master system that can control the CPU, so care needs to be taken that it can't be taken over. But security research in 2013 found that on earlier systems with the IME, you could slide the APIC address window over the address space where the IME is located, inject code into it, and do whatever you wanted with the compromised computer - and nothing in the computer could stop this, because the IME has the highest level of access of anything in the system. Intel independently found this around the same time, and it has since been fixed by refusing to let the APIC address window slide over the IME's. A sketch of that kind of overlap check follows below.
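The fix described above boils down to a bounds check on the relocation request. Here is a minimal sketch of that idea in C; it is not Intel's actual code, and the addresses below are invented purely for illustration.

```c
/* Illustrative only: refusing to relocate the APIC's MMIO window onto a
 * protected region. Not Intel's code; all addresses below are made up. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t base, size; } mmio_region;

static bool regions_overlap(mmio_region a, mmio_region b) {
    return a.base < b.base + b.size && b.base < a.base + a.size;
}

static bool apic_relocation_allowed(mmio_region requested, mmio_region reserved) {
    return !regions_overlap(requested, reserved);
}

int main(void) {
    mmio_region ime_window  = { 0xFED00000ULL, 0x100000ULL }; /* hypothetical */
    mmio_region apic_target = { 0xFED80000ULL, 0x1000ULL };   /* hypothetical request */

    printf("APIC relocation %s\n",
           apic_relocation_allowed(apic_target, ime_window) ? "allowed" : "rejected");
    return 0;
}
```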
  • The 80286 introduced "protected mode," the first memory management for x86 processors. The only problem is that it was impossible to switch back to "real mode," which many MS-DOS programs required to run. This is why Bill Gates called the chip "brain-damaged." The chip was still popular, powering the IBM PC AT and many clones, but mainly as just a fast 8088 processor. The 286 was popular for running multiuser, multitasking systems with the UNIX variant XENIX, where the need to switch back to real mode was not an issue. The 80386 fixed this issue, being able to switch between modes.
  • In early 2021, with Intel getting completely crushed by AMD's Zen 3 architecture — thanks to their 10nm process for years being an utter disaster which couldn't produce anything more than a four-core mobile chip (and it took until late 2019 just to be able to produce that), leaving them unable to create anything more than higher-core variants of their 2015-era Skylake core on the desktop side — they took the desperate move of trying to back-port one of their planned 10nm designs onto the 14nm process used by Skylake, producing the Rocket Lake core. The end result of trying to port a core onto a process that had far higher power demands than it was designed for was an incredibly power-hungry, thermally-constrained chip that, in a near-repeat of Prescott, ended up more often than not performing worse than its predecessor, thanks to Intel having to reduce it from 10 cores to 8 to keep the power consumption under control. Adding insult to injury, later that year finally saw the release of Intel's first mainline 10nm processor (on the fourth major revision of the process), Alder Lake, which finally brought some much-needed performance advances and made Intel competitive again, to the point where even the mid-range Core i5s based on Alder Lake outperformed the entire Rocket Lake line-up.

    AMD 
  • AMD's wildly successful Athlon Thunderbird ran at high speeds and for a while obliterated everything else on the market, but it was also the hottest CPU ever made up until that point. This wouldn't be so bad in and of itself - even hotter CPUs were made by both AMD and Intel in later years - but the Thunderbird was special in that it had no heat-management features whatsoever. If you ran one without the heatsink - or, more plausibly, if the heavy chunk of aluminium sitting on the processor broke the mounting clips through its sheer weight and dropped to the floor of the case - the processor would insta-barbecue itself.
  • In late 2006 it was obvious that Intel were determined to pay AMD back for the years of ass-kickings it had endured at the hands of the Athlon 64, by releasing the Core 2 Quad only five months after the Core 2 Duo had turned the performance tables. The Phenom was still some ways off, so AMD responded with the Quad FX, a consumer-oriented dual-processor platform that could mount two dual-core chips (branded as Athlon 64s, but actually rebadged Opteron server chips). While repurposing Opterons for desktop use was something that had worked magnificently three years prior, this time it became obvious that AMD Didn't Think This Through - not only was this set-up more expensive than a Core 2 Quad (the CPUs and motherboard worked out to about the same price, but you needed twice the memory modules, a more powerful PSU, and a copy of Windows XP Professional), but it generally wasn't any faster, and in anything that didn't use all four cores actually tended to be far slower, as Windows XP had no idea how to deal with the two memory pools created by the dual-CPU set-up (Vista was a lot more adept in that department, but had its own set of problems).
  • Amazingly enough, things got worse when the Phenom eventually did arrive on the scene. In addition to being clocked far too slow to compete with the Core 2 Quad - which wasn't really due to any particular design flaw, other than its native quad-core design being a little Awesome, but Impractical - it turned out that there was a major problem with the chip's translation lookaside buffer (TLB), which could lead to crashes and/or data corruption in certain rare circumstances. Instead of either initiating a full recall or banking on the fact that 98% of users would never encounter this bug, AMD chose a somewhat odd third option and issued a BIOS patch that disabled the TLB altogether, crippling the chip's performance. They soon released Phenoms that didn't have the problem at all, but any slim hope of it succeeding went up in smoke after this fiasco.
    • Things got better with the Phenom II, which improved performance dramatically, bringing AMD close to Intel once again; better yet, their 6-core chips were good enough to win over some buyers (and defeat the first generation of Core i7 in some uses), indicating that the original Phenom was an otherwise-sound design brought down by poor clock speeds, the TLB glitch, and not enough cache. Which is still more than can be said for the next major design...
  • AMD's Bulldozer was something that may make people wonder why they went this route. On the surface, AMD made two integer cores share a floating point unit. This makes some sense, as most operations are integer-based. Except those cores also share an instruction decoder and scheduler, effectively making a single core with two disjointed pools of execution units. Also, each integer core was weaker than the Phenom II's core. To make matters worse, they adopted a deep pipeline and high clock frequencies. If anyone had paid attention to processor history, those two things were the root cause of why the Pentium 4 failed. Still, it was forgivable since they used more cores (up to 8 threads in 4 modules) and higher clock speeds than Intel in order to compensate, making it at least useful on some occasions (like video editing or virtual machines).
    • However, it went downhill with Carrizo, the last major family based on the Bulldozer lineage. It cut the L2 cache that had kept its performance from being outmatched by Intel's Haswell, and it was stuck on the 28 nm process, making things worse. Even worse? Builders of laptops (which Carrizo was intended for) decided to use the worst possible designs for it, dragging its performance down to near Intel Nehalem levels, an architecture that was six years out of date by then. One could get the impression that AMD simply didn't care about the Bulldozer family by this point anymore, and just quickly shoved Carrizo out the door so as not to waste the R&D money, while also establishing the Socket AM4 infrastructure that their next family, Ryzen (which got them back on track and then some), would use.
  • The initial batch of the RX 480 graphics cards had a total board power target of 150W, 75 of which could be sourced from the slot itself, which is part of the PCI-Express spec note . The problem came about when some of the cards began drawing more power than the spec allowed due to board manufacturers factory overclocking the things. In some cases, this caused cards to burn out the slot on motherboards that weren't overbuilt. A driver fix came out later to limit how much power the card draws to avoid overloading the slot.
  • When AMD was ready to release the RX 5600 and RX 5600 XT graphics cards, NVIDIA had just cut the price of their midrange RTX 2060 card. In response, AMD bumped up the cards' official clock speeds at the last minute. While this sounds like a good idea, in reality the board manufacturers were more than a little upset, because it meant that not only did they have to figure out how to reflash the firmware on cards potentially already out for sale (otherwise they'd run at the slower speeds), but they may also have needed time to re-verify their designs to make sure the new clock speeds actually worked well.
  • Back when Ryzen was announced, AMD promised that the new socket to go with the CPU line, Socket AM4, would be supported until 2020. Originally AMD planned on having Zen 3, the third generation of the architecture Ryzen was based on, be the forerunner of a new socket and chipset with a planned launch time of 2020. However, people took "until 2020" to mean "AM4 will support Zen 3 because it's being released in 2020." This caused a slew of confusion over which AM4 boards could support the new chips. To make matters worse, older AM4 boards had only 16MB of firmware space due to a limitation in the first and second generation Ryzen processors, while newer AM4 boards typically came with 32MB or more. This meant that in order to support Zen 3, older AM4 boards had to drop support for older Zen-based processors. Worse still, AMD's firmware has a one-way upgrade system: if the firmware is upgraded past a certain point, it can't be rolled back. So if someone updated an older AM4 board, it would no longer be compatible with older Zen processors, which would be a problem if that person wanted to recycle the board by pairing it with a (presumably cheaper) older Zen part.
  • For some reason, AMD releases lower-midrange to lower-end cards with fewer than 16 PCI Express lanes. This alone might not be a problem, but it is on video cards that don't have enough VRAM to hold the data the GPU can actually process.
    • For example, the RX 5500 XT used 8 lanes and came in 4GB and 8GB flavors. It was found that when the 4GB version runs out of VRAM, performance starts tanking heavily if the card isn't in a PCI Express 4.0 slot. The problem is that at the time of release, PCI Express 4.0 was bleeding edge (and hence compatible parts were more expensive) and this was a budget-oriented card.
    • When AMD announced the RX 6500 XT, they further reduced the lane count to 4, and there's only a 4GB VRAM version. A reviewer tested the RX 5500 XT with 4 lanes to simulate how bad this could be and found performance drops by a staggering ~40%. While PCI Express 4.0 had seen wider adoption by the time of the card's release, the card is most attractive to people with older systems that only have 3.0 slots. When the card launched, reviewers were not kind to it, noting that the card it was meant to replace, the RX 5500 XT, could still beat it, and that it even lost to NVIDIA's previous-generation card in the same market segment. It also lacked certain multimedia features such as AV1 decoding support. "Technical staff" from AMD went onto forums to explain that the RX 6500 XT was meant to be used in laptops, paired with one of their upcoming APUs that included the missing features, with the strong implication that it was only released on desktops due to the supply issues facing the market.note  Even worse, because the GPU can only support two VRAM chips, there was little if any chance of an 8GB version, since 2GB chips were the largest size available and 4GB chips weren't on anyone's roadmap.


Multiple companies:

    Computers and smartphones 
  • On older laptops, doing things like adjusting volume or screen brightness would require you to hold the "fn" key and press a certain function key at the same time. Since the "fn" key is normally at the bottom of the keyboard and the function keys are normally at the top, this could get annoying or uncomfortable, so most companies now do the opposite: pressing a function key on its own will adjust volume or brightness, and users who actually want to press the function key can do so by holding down the "fn" key. Since most people adjust the volume or brightness much more often than they need to press F6, this is nice. However, the buttons do different things depending on the brand (controlling volume might be F2 and F3 on one brand, and F4 and F5 on another). Additionally, most desktop keyboards don't do this at all, so switching computers means that an entire row of the keyboard works differently. Disabling this on laptops requires changing a setting in the UEFI, which doesn't support screen readers or other accessibility features.
  • The Coleco Adam, a 1983 computer based on the fairly successful ColecoVision console, suffered from a host of problems and baffling design decisions. Among the faults were the use of a proprietary tape drive which was prone to failure, locating the whole system's power supply in the printer (meaning the very limited daisy-wheel printer couldn't be replaced, and if it broke down or was absent, the whole computer was rendered unusable), and poor electromagnetic shielding which could lead to tapes and disks being erased at startup. Even after revised models ironed out the worst bugs, the system was discontinued after less than 2 years and sales of 100,000 units.
  • The Samsung Galaxy Note 7, released in August 2016 and discontinued just two months later. On the surface, it was a great cell phone that competed with any number of comparable phablets. The problem? It was rushed to market to beat Apple's upcoming iPhone 7 (which came out the following month), and this left it with a serious problem: namely, that it had a habit of spontaneously combusting. Samsung hastily recalled the phone in September once it started causing dozens of fires (to the point where aviation safety authorities were telling people not to bring them onto planes), and gave buyers replacements with batteries from a different supplier. When those phones started catching fire as well, it became obvious that the problems had nothing to do with quality control and ran to the heart of the phone's design note . By the time that Samsung discontinued the Galaxy Note 7, it had already become the Ford Pinto of smartphones and a worldwide joke, with every major wireless carrier in the US having already pulled them from sale. Samsung especially doesn't want to be reminded of it, to the point that they ordered YouTube to take down any video showing a mod for Grand Theft Auto V that reskins the Sticky Bombs into the Galaxy Note 7.
  • The Google Nexus 6P, made by Huawei, has multiple major design flaws that make accidental damage from sitting on it potentially catastrophic: the back isn't screwed into the structure of the phone even though it's not intended to be removable, leaving the thin aluminum on the sides and a slab of Gorilla Glass 4 not designed for structural integrity to hold the phone together; there's a gap between the battery and motherboard right by the power button that creates a weak point; and the plastic and metal are held together with dovetail joints, which are intended for woodworking. note  Zack Nelson, testing the phone for his YouTube channel JerryRigEverything, was able to destroy both Nexus 6Ps he tested immediately.
  • The 2011 Samsung Galaxy Ace S5830 attracted a good few customers due to its low price and decent specs for the time; however, widespread dissatisfaction followed due to a cripplingly tiny 158 megabytes of internal storage. Users would find that just installing the most widely-used apps - Whatsapp, Facebook, and Messenger - would fill up all available storage. Samsung bundled a 2GB microSD card with the phone and you could move some application data to it using third-party hacks, but you couldn't move all storage to it, so it was a temporary fix that might let you install an app or two more - then you'd run out again. Fixing it properly was most definitely not a newbie-friendly operation: it entailed rooting and flashing a special hack that would integrate the SD card's filesystem to the phone's, making Android think they were one and the same. This worked to make the phone usable, but at the price of speed and complete dependence on the SD card, which, if it got corrupted or lost, would render the phone unusable without further hacking. It was one of Samsung's most hated models and occasionally suffers ridicule even now.
    • The 2013 Mito A300, a look-alike released only in Indonesia by a local electronics company, had (on top of an inaccurate touchscreen, no 3G support, and a weak bundled battery) the same cripplingly tiny storage problem as the Galaxy Ace. However, unlike the Galaxy Ace's Android 2.3, it ran Android 4.0 and was at least able to natively move some apps to the microSD card, though not all apps could be moved over.
  • The Dell Inspiron line was somewhat known for having very faulty speakers. These usually boil down to two issues:
    • The ribbon cable that connects the speakers to the motherboard is very frail, and often comes loose if you move your device too much (the device in question being a laptop designed for portability).
    • The headphone jack has a physical switch that turns off the speakers. If you plug something into the jack, the speakers turn off; unplug it, and they turn on. Simple, right? Well, this switch is bound to get stuck at one point or another, and the only way to even get to it is to disassemble the damn thing, or by spraying contact cleaner in the port and hoping that the air pressure flips the switch.
  • The Dell XPS line has also been known for its fair share of design failures.
    • The Dell XPS 13 9350/9360 has 4 NVME lanes going to the m.2 Slot and 2 going to the Thunderbolt 3 port- which means it sacrificed Thunderbolt bandwidth for better speed on the SSD. This would be a fair tradeoff had Dell not decided to put the M.2 lanes in Power Saving mode, crippling its performance. The user cannot do anything about this.
    • The Dell XPS 13 2-in-1 looked good on the surface- until people realized that in the pursuit of thinness, Dell had soldered the SSD down to the board, rendering the laptop a brick if it ever failed.
    • The 2020 XPS lineup suffered from a laundry list of issues, ranging from premature GPU failure and screen bleeding to inexplicable trackpad problems. Other issues include review units arriving with dead webcams, several reports of dead audio jacks, and broken keyboards out of the box - it got so bad that people began posting to celebrate the fact that their $1000+ laptop had arrived without any major QC failures.
    • Many XPSes arrived with loose trackpads. The solution was to dismantle the laptop and tighten a few screws.
  • Some models of the Acer Aspire One had the right speaker mounted in such a way that its vibrations would, at best, cause the hard disk to almost halt, and at worst cause bad sectors and corrupt partitions.
  • The Samsung Galaxy Fold would have been released in April 2019 had catastrophic teething problems not come to light just days before release. On the surface, it was revolutionary: a smartphone that could be folded up like the flip phones of the '90s and 2000s, allowing a pocket-sized device to have a 7.3-inch screen like a small tablet computer. Unfortunately, reviewers' $1,980 Galaxy Folds started breaking just a few days after they got them, one of the worst possible first impressions for foldable smartphones. Many of the problems could be traced to people removing the protective film on the screen - a film that, in a true case of this trope, looked almost exactly like the thin plastic screen protectors that normally ship on the screens of other smartphones and tablets to keep them from getting scratched or accumulating dust during transport, leaving many to wonder just why Samsung not only made a necessary part of the phone so easy to remove, but made it look so similar to a part of other phones that is designed to be removed before use.
  • VIA processors of years past are almost worth a place on this list due solely to their absolutely abysmal performance - at the time they competed with Pentium IIIs, and routinely lost to them despite having 2-3 times as many MHz - but what truly sets the VIA C3 apart is that there is a second, completely undocumented, non-x86 core that has complete control over the security system. It has never actually been used by any known appliance and no official instructions exist on how to access it - making one wonder why exactly it's there in the first place - but in 2018 a curious hacker found out about it in VIA's patents. He investigated, managed to activate it, and thus gave the VIA C3 the dubious prize of being the first processor ever that lets any unauthorised user take complete control of the system without using any bugs or exploits, simply by utilising resources put there by the manufacturer. To compound the issue, the C3 was often used in point-of-sale terminals and ATMs, juicy targets for exactly this sort of manipulation. The only reason this didn't turn into a worldwide scandal is that the processor is so old it's been superseded everywhere that matters.
  • The Toshiba Satellite A205 was a mess. While it was one of the few laptops that could competently run Windows Vista, it had its own issues on the hardware side. Firstly, both its battery and AC adapter were prone to going bad after just a few months - either the battery stopped holding a charge altogether, or the AC adapter stopped working all of a sudden. The other problem was the hard drive bay. If you were planning on swapping the drive out, better hope you got one that's exactly the right size, because if it's too small, it'll just slide loose if you tilt the device ever so slightly.
  • The Ayaneo 2s is one of the many Steam Deck clones that have been made recently; however, it has one glaring problem. When the device heats up (which it often does when gaming), not only does it become almost too hot to touch, but the excessive heat can also cause light bleed issues with the LCD display, most likely due to the heat softening the glue that holds the diffusion layer in place. A more in-depth review, as well as a more thorough demonstration of the issue, can be found here.
    • The Asus ROG Ally has another issue related to excess heat: this time it's the SD card reader, which, when the device is under load, slows transfer speeds to a crawl - that is, if the card is recognized at all. Asus acknowledged the issue and quickly released a software update that tweaked fan speeds to hopefully alleviate it.
  • The Lenovo Yoga folding tablet/laptop seems to have problems no matter what model you buy, but the 720-13IKB probably has it the worst with its power button. Being on the side, the power button interfaces with a small piece that is manipulated to press into the motherboard, which sends the signal to turn the laptop on and off. The problem is that this small piece is made of cheap aluminum, attached to a similarly-cheap plastic mounting piece, making perhaps the most important button on the laptop ridiculously easy to break just from regular use. Even worse is that Lenovo apparently foresaw this problem and included a backup reset button on the other side of the laptop - but they apparently couldn't foresee why this would be a problem, because this reset button uses the same cheap design and materials as the main power button, making it just as likely to break.

    Computer Hardware and Peripherals 
  • Famously, the "PC LOAD LETTER" message you'd get on early HP Laserjets has been elevated as an example of confusion in user interfaces. Anyone without prior knowledge would assume something is wrong with the connection to the PC, or something is off in the transfer of data ("load" being interpreted as "upload"), and that the printer is refusing the "letter" they're trying to print. What it actually means is "load letter-sized paper into paper cassette"; why the printer wasn't simply programmed to say "OUT OF PAPER" is a Riddle for the Ages.
  • Some HP computers come with batteries or power supply units that are known to explode. Literally, with no exaggeration, they release sparks and smoke (and this is a "known issue"). Others overheat and burst into flames. And there have been multiple recalls, proving that they obviously didn't learn from the first one.
  • The infamous A20 line. Due to the quirk in how its addressing system worked note , Intel's 8088/86 CPUs could theoretically address slightly more than their advertised 1 MB. But because they physically still had only 20 address pins, the resulting address just wrapped over, so the last 64K of memory actually was the same as the first. Some early programmers note  were, unsurprisingly, stupid enough to use this almost-not-a-bug as a feature. So, when the 24-bit 80286 rolled in, a problem arose - nothing wrapped anymore. In a truly stellar example of "compatibility is God" thinking, IBM engineers couldn't think up anything better than to simply block the offending 21st pin (the aforementioned A20 line) on the motherboard side, making the 286 unable to use a solid chunk of its memory above 1 meg until this switch was turned on. This might have been an acceptable (if very clumsy) solution had IBM defaulted to having the A20 line enabled and provided an option to disable it when needed, but instead they decided to have it always turned off unless the OS specifically enables it. By the time of the 386, no sane programmer used that "wrapping up" trick any more, but turning the A20 line on is still among the very first things any PC OS has to do. It wasn't until Intel introduced the Core i7 in 2008 that they finally decided "screw it" and locked the A20 line into being permanently enabled. A worked example of the wraparound arithmetic follows this entry.
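The sketch below is illustrative only: real-mode addresses are formed as segment*16 + offset, so FFFF:0010 lands at 0x100000, which an 8086 with only 20 address pins silently wraps back to 0x00000, while a 286 with the A20 line enabled does not.

```c
/* Illustrative only: real-mode address arithmetic and the 8086's 20-bit
 * wraparound that the A20 gate exists to emulate. */
#include <stdio.h>
#include <stdint.h>

static uint32_t real_mode_addr(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;   /* can reach 0x10FFEF */
}

int main(void) {
    uint32_t linear  = real_mode_addr(0xFFFF, 0x0010);  /* 0x100000, just past 1 MB */
    uint32_t on_8086 = linear & 0xFFFFF;                /* 20 address pins: wraps to 0x00000 */
    uint32_t on_286  = linear;                          /* A20 enabled: no wrap */

    printf("FFFF:0010 -> linear %06X, 8086 sees %05X, 286 (A20 on) sees %06X\n",
           (unsigned)linear, (unsigned)on_8086, (unsigned)on_286);
    return 0;
}
```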
  • Qualcomm had their own share of failures: the Snapdragon 808 and 810 were very powerful chips for the time (2015), since they were based on the high-performance ARM A57 design, but they had a very important disadvantage: they overheated to the point of throttling and losing performance! And three phones got hit hard by this: the LG G4 (with the Snapdragon 808), which became infamous for dying after just one year; the HTC One M9 (with the Snapdragon 810), which became infamous for overheating a lot; and the Sony Xperia Z5, for the same reasons as the M9. No wonder the rest of the competition (HiSilicon and MediaTek) avoided the ARM A57 design.
  • The iRex Digital Reader 1000 had a truly beautiful full-A4 eInk display... but was otherwise completely useless as a digital reader. It could take more than a minute to boot up, between 5 and 30 seconds to change between pages of a PDF document, and could damage the memory card inserted into it. Also, if the battery drained all the way to nothing, starting to charge it again would cause such a current draw that it would fail to charge (and cause power faults) on any device other than a direct USB-to-mains connector, which was not supplied with the hardware.
  • Motorola is one of the most ubiquitous producers of commercial two-way radios, so you'd think they'd have ironed out any issues by now. Nope, there are a bunch.
    • The MTX 9000 line (the "brick" radios) were generally made of Nokiamantium, but they had a huge flaw in the battery clips. The battery was held at the bottom by two flimsy plastic claws and the clips at the top were just slightly thicker than cellophane, meaning that the batteries quickly became impossible to hold in without buying a very tight-fitting holster or wrapping rubber bands around it.
    • The software to program virtually any Motorola radio, even newer ones, is absolutely ancient. You can only connect via serial port - an actual serial port, as a USB-to-serial adapter generally won't work. And the system it runs on has to be Stone Age slow (Pentium Is from 1993 are generally too fast), meaning that in most radio shops there's a 486 in the corner just for programming them. Even the new XPR line generally can't be programmed with a computer made after 2005 or so.
      • If you can't find a 486 computer, there's a build of DOSBox floating around ham circles with beefed-up code to slow down the emulated environment even further than is possible by default. MTXs were very popular for 900MHz work because, aside from the battery issue, they were tough, and cheap to get thanks to all the public agencies and companies that sold them off in bulk.
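        For reference, even the stock emulator can be reined in quite a bit through its config file; the special builds just push the floor lower. Something along these lines is the general idea (illustrative dosbox.conf values only, not a tested recipe for any particular radio or programming cable):

          [cpu]
          core=normal
          cputype=386_slow
          # Pin the emulated CPU to a fixed, very low cycle count instead of
          # letting DOSBox auto-scale it to the host machine's speed.
          cycles=fixed 300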
  • VESA Local Bus. Cards were very long and hard to insert because they needed two connectors: the standard ISA slot plus an additional 32-bit connector wired straight into the 486's local bus, which caused huge instability and incompatibility problems. Things could get worse if a non-graphics expansion card (usually an IO port card) was installed next to a video card, which could result in crashes when games using SVGA graphics accessed the hard drive. The multiple clock frequencies involved also imposed high standards on the construction of the cards in order to avoid further issues. All these problems eventually caused the 486-bus-dependent VLB to be replaced by PCI, starting from late-development 486 boards onwards into the Pentium era.
    • Proving that lightning can in fact strike the same place twice, however, was the invention of PCI-X (PCI Extended) almost a decade later, which had the same idiot design as VLB - namely, a standard PCI 2.3 slot with an additional second slot behind it. Needless to say, PCI-X cards were bulky and huge, and the standard had its own myriad of issues, including the rule that the whole bus runs at the speed of the slowest card on the motherboard. Thankfully the competing standard, PCI Express (PCIe), won.
  • The Radio Shack TRS-80 (model 1) had its share of hardware defects:
    • The timing loop constant in the keyboard debounce routine was too small. This caused the keys to "bounce" - one keypress would sometimes result in 2 of that character being input.
    • The timing loop constant in the tape input routine was wrong. This made the volume setting on the cassette player extremely critical. This problem could somewhat be alleviated by placing an AM radio next to the computer and tuning it to the RFI generated by the tape input circuit, then adjusting the volume control on the tape player for the purest tone from the radio. Radio Shack eventually offered a free hardware modification that conditioned the signal from the tape player to make the volume setting less critical.
    • Instead of using an off-the-shelf Character Generator chip in the video circuit, RS had a custom CG chip programmed, with arrow characters instead of 4 of the least-used ASCII characters. But they made a mistake and positioned the lowercase "a" at the top of the character cell instead of at the baseline. Instead of wasting the initial production run of chips and ordering new chips, they eliminated one of the video-memory chips, added some gates to "fold" the lowercase characters into the uppercase characters, and modified the video driver software to accommodate this. Hobbyists with electronics skills were able to add the missing video memory chip, disconnect the added gates, and patch the video driver software to properly display lowercase, albeit with "flying a's". The software patch would have to be reloaded every time the computer was booted. Radio Shack eventually came out with an "official" version of this mod which included a correctly-programmed CG chip.
    • The biggest flaw in the Model 1 was the lack of gold plating on the edge connector for the Expansion Interface. Two-thirds of the RAM in a fully-expanded TRS-80 was in the EI, and the bare copper contact fingers on the edge connector oxidized readily, resulting in very unreliable operation. It was often necessary to shut off the computer and clean the contacts several times per day. At least one vendor offered a "gold plug", which was a properly gold-plated edge connector which could be soldered onto the original edge connector, eliminating this problem.
    • In addition, the motherboard-to-EI cable was very sensitive to noise and signal degradation, which also tended to cause random crashes and reboots. RS attempted to fix this by using a "buffered cable" to connect the EI to the computer. It helped some, but not enough. They then tried separating the 3 critical memory-timing signals into a separate shielded cable (the "DIN plug" cable), but this still wasn't enough. They eventually redesigned the EI circuit board to use only 1 memory timing signal, but that caused problems for some of the unofficial "speed-up" mods that were becoming popular with hobbyists.
    • The Floppy Disk Controller chip used in the Model I EI could only read and write Single Density disks. Soon afterwards a new FDC chip became available which could read and write Double Density (a more efficient encoding method that packs 80% more data in the same space). The new FDC chip was almost pin-compatible with the old one, but not quite. One of the values written to the header of each data sector on the disk was a 2-bit value called the "Data Address Mark". Two pins on the single-density FDC chip were used to specify this value. As there were no spare pins available on the DD FDC chip, one of these pins was reassigned as the "density select" pin. Therefore the DD FDC chip could only write the first two of the four possible DAM values. Guess which value TRS-DOS used? Several companies (starting with Percom, and eventually even Radio Shack themselves) offered "doubler" adapters - a small circuit board containing sockets for both FDC chips! To install the doubler, you had to remove the SD FDC chip from the EI, plug it into the empty socket on the doubler PCB, then plug the doubler into the vacated FDC socket in the EI. Logic on the doubler board would select the correct FDC chip.
  • The TRS-80 model II (a "business" computer using 8-inch floppy disks) had a built-in video monitor with a truly fatal flaw: the sweep signals used to deflect the electron beam in the CRT were generated from a programmable timer chip. When the computer booted, one of the first things it would do was write the correct timer constants to the CRTC chip. However, an errant program could accidentally write any other values to the CRTC chip, which would throw the sweep frequencies way off. The horizontal sweep circuit was designed to operate properly at just one frequency and would "send up smoke signals" if run at a significantly different one. If your screen went blank and you heard a loud, high-pitched whine from the computer, you had to shut the power off immediately, as it only took a few seconds to destroy some rather expensive components in the monitor.
  • NVIDIA has had a checkered history:
    • Nvidia's early history is interesting - in the same way a train wreck is. There's a reason why their first 3D chipset, the NV1, barely gets a passing note in the official company history page. See, the NV1 was a weird chip which they put on an oddball - even for the time - hybrid card meant to let you play games ported from the Sega Saturn on the PC; this was no coincidence, as the Saturn itself had a related but earlier video adapter. The chip's weirdness came from its use of quadrilateralnote  primitives; the rest of the 3D world used triangle primitives, which are so much easier to handle that nobody else has deviated from them to this day. Developing for the quad-supporting chip was complicated, unintuitive, and time-consuming, as was porting triangle-based games from other platforms, so the NV1 was wildly unpopular from the start. Additionally, the hybrid Nvidia cards integrated a sound card with full MIDI playback capability and a pair of game ports that accepted Saturn controllers, all of which increased cost and complexity. On top of that, the board's sound codec was no better than a SoundBlaster clone (the SoundBlaster being the then-standard for PC audio), and the sound portion of the card could not work reliably in MS-DOS - at a time when MS-DOS games were still very much a thing: CD-ROMs often contained both the DOS and Windows 9x versions of a game, and many people actively booted back into DOS because the DOS version oftentimes just ran better than the Windows version for various reasons. When Microsoft came out with Direct3D it effectively killed the NV1, as the chip was all but incompatible with the new API. Nvidia stubbornly went on to design the NV2, still with quad mapping, intending to put it in the Dreamcast - but then Sega saw the writing on the wall, told Nvidia "thanks but no thanks" and went on to also evaluate GPUs with triangle polygonsnote . Nvidia finally saw the light, dropped quads altogether and came out with the triangle-primitive-based Riva 128, which was a decent hit and propelled them back onto the scene - probably with great sighs of relief from the shareholders.
    • When it came time to launch the GeForce 4, NVIDIA wanted to cater to both the high-end and mainstream/budget markets like they did with the GeForce 2 series. So they launched the flagship GeForce 4 Ti series and the mainstream GeForce 4 MX series. The problem was that the GeForce 4 MX series were nothing more than souped up GeForce 2 MX GPUs, which at the time was an aging budget GPU line. Since both the Ti and MX were called GeForce 4, understandably consumers were upset to find the MX series was really lacking. NVIDIA decided from here on out to not do this again (at least to such a noticeable degree), except their first attempt at this was... well, lacking.
    • NVIDIA's entry into the DirectX 9 API, the GeForce FX, was rife with problems. The first was that its design wasn't as fundamentally sound as that of the competing ATi GPU of the time, the Radeon R300 series, which meant the GPU had to run at higher clocks to make up for the deficit in raw performance. The second was that NVIDIA optimized the architecture for 16-bit floating point rather than the 24-bit precision the DirectX 9 standard required at the time. And lastly, there were issues with the 130nm process being used, resulting in lower yields and less-than-expected performance. The overall result was a GPU that couldn't really compete at the same level as the R300 and was less efficient while trying. Adding insult to injury, to avoid the disjointed feature set fiasco of the GeForce 4 and GeForce 4 MX, the entire GeForce FX product stack had the same feature set. Which is fine and all, until someone tried to run a DirectX 9 game on an FX 5200 and ended up with seconds-per-frame performance.
  • Improvements in low-power processor manufacture by Intel - namely the Bay Trail-T system-on-a-chip architecture - have now made it possible to manufacture an honest-to-goodness x86 computer running full-blown Windows 8.1 and with moderate gaming capabilities in a box the size of a book. Cue a whole lot of confounded Chinese manufacturers using the same design standards they used on ARM systems-on-a-chip to build Intel ones, sometimes using cases with nary a single air hole and often emphasizing the lack of need for bulky heatsinks and noisy fans. Problem: You do actually need heat sinking on Intel SoCs, especially if you're going to pump them for all the performance they're capable of (which you will, if you use them for gaming or high-res video playback). Without a finned heatsink and/or fan moving air around, they'll just throttle down to crawling speed and frustrate the users.
  • Back in the early days of 3D graphics cards, when they were still called 3D accelerators and companies like 3dfx hadn't found their stride, there was the S3 ViRGE. The card had good 2D performance, but such a weak 3D chip that at least one reviewer called it, with good reason, the world's first 3D Decelerator. That epithet is Exactly What It Says on the Tin: games using the ViRGE's 3D acceleration often ran worse than they did in plain software rendering, i.e. with no 3D acceleration at all.
  • The "Home Hub" series of routers provided by UK telecom giant BT are fairly capable devices for the most part, especially considering that they usually come free to new customers. Unfortunately, they suffer from a serious flaw in that they expect to be able to use Wi-Fi channels 1, 5, or 11, which are very crowded considering the ubiquity of home Wi-Fi, and BT's routers in particular. And when that happens, the routers will endlessly rescan in an effort to get better speeds, knocking out your internet connection for 10-30 seconds every 20 minutes or so. Sure, you can manually force the router into using another, uncongested channel... except that it'll keep rescanning based on how congested channels 1, 5, and 11 are, even if there are no devices whatsoever on the channel that you set manually. Even BT's own advice is to use ethernet (and a powerline adapter if needed) for anything that you actually need a rock-solid connection on.
  • Wireless mice still have certain design flaws nobody seems particularly willing to fix. One widespread issue is that the power switch is almost always on the bottom of the mouse body - the part that is constantly grinding against the surface you use the mouse on - so unless the switch is recessed far enough into the body, it gets jostled constantly, messing with your mouse movement; especially ironic if the mouse advertises itself as a "gaming" device, then interferes with your attempts to actually play games with it. Rechargeable ones can get even worse, as most are set up in a way that makes it impossible to use the mouse while it's charging, such as the already-mentioned Apple Magic Mouse 2. Then there are models like Microsoft's Rechargeable Laser Mouse 7000 that manage to combine both of the above problems and add a new one: it's designed so that the battery has to depress a small button in the battery compartment for the charger to actually supply power. As it turns out, the proprietary rechargeable battery that comes with the mouse is, for some reason, slightly thinner than a regular AAA battery, so it doesn't depress the button - requiring you to either wrap some material around the battery at the contact point to get it to charge, or give up on the "rechargeable" bit and just use a regular AAA battery, replacing it as necessary.
  • The Ion Party Float speaker had subjectively good audio for its price range and good portability, but it had issues with the audio breaking up or lagging in recordings containing silent passages; this could fortunately be circumvented by using the 3.5mm audio jack. However, multiple users who used it as an actual pool float reported that the unit could start to malfunction, and using it that way is a hassle anyway: according to the manufacturer, a drained unit needs to be dried thoroughly before it can be recharged. There were also reports of the battery failing prematurely, with the system refusing to let it charge at all. Fortunately, its successor products corrected issues such as the audio lag, and the Party Float itself works fine if kept out of the water and used only in dry places - the water apparently being what damages the battery system in the first place.
  • On August 16, 2022, multiple networking devices built around the Realtek RTL819xD system-on-a-chip were found to be vulnerable to remote crashes, remote arbitrary code execution, the establishment of backdoors, and tampering with network traffic, thanks to the chip's widespread adoption across multiple brands of equipment. While it's hardly surprising that a security hole was eventually found, the sheer breadth of adoption left multiple companies scrambling to implement Realtek's patches for their products and network administrators scrambling to secure their networks against potential worms seizing control of their systems. A major hitch is that not all manufacturers bothered to release patches for their equipment, meaning some admins were forced to replace the affected gear.
  • An ongoing problem for Intel and AMD is what default settings motherboard manufacturers use with regards to performance boosting, and how they build the circuitry to support higher power loads. For instance:
    • Intel's more recent processors specify two power limits: a short-duration, high-power boost limit and a lower limit that can be sustained indefinitely. The duration of the high-power boost is configurable, meaning some motherboard manufacturers simply set that value to effectively infinity, causing the CPU to boost as hard as possible for far longer than it was designed to (see the rough sketch after this list).
    • When AMD's Ryzen 7800X3D processors started burning themselves (and their motherboard sockets) out, analysis suggested that motherboard manufacturers were, to varying degrees, partly to blame. One manufacturer in particular was found to have overcurrent protection (which would've helped prevent the CPU from cooking itself) configured to allow too much current for too long to be of any use - and this was on a board approaching $1000.
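    To make the Intel side of this concrete, here's a rough sketch in C of the two knobs involved. The field names mirror Intel's PL1/PL2/Tau terminology; the numbers are ballpark placeholders for illustration, not any particular CPU's official limits or any specific board's firmware defaults:

        #include <stdio.h>

        /* Illustrative only: the sustained limit (PL1), the short boost
           limit (PL2), and how long PL2 may be held (Tau). */
        struct power_limits {
            unsigned pl1_watts;
            unsigned pl2_watts;
            unsigned tau_seconds;
        };

        int main(void) {
            struct power_limits per_spec  = { 125, 253, 56 };        /* roughly what the CPU vendor intends */
            struct power_limits per_board = { 4095, 4095, 9999999 }; /* "unlimited" in all but name         */

            printf("spec : PL1=%uW PL2=%uW Tau=%us\n",
                   per_spec.pl1_watts, per_spec.pl2_watts, per_spec.tau_seconds);
            printf("board: PL1=%uW PL2=%uW Tau=%us\n",
                   per_board.pl1_watts, per_board.pl2_watts, per_board.tau_seconds);
            return 0;
        }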
  • HP keyboards and mice come with a somewhat jarring and inexcusable problem that causes the former to glitch (e.g. delay or not register keypresses or releases) and the latter to jitter. The company really can't catch a break.

    Mass-Storage Devices 
  • The Commodore 64, one of the most popular computers of all time, wasn't without its share of problems. Perhaps the most widely known is its extreme slowness at loading programs. This couldn't really be helped with a storage medium like tape, which remained slow even after various clever speed-up schemes, but floppy disks really ought to have been faster. What happened was that Commodore had devised a hardware-assisted system for transferring data that should have worked fairly well, but then discovered a hardware bug in the input/output chip that stopped it from working at all. Replacing the buggy chips was economically unfeasible, so the whole thing was reworked to run entirely in software. This slowed down drive access immensely and gave birth to a cottage industry of speeder carts, replacement ROM chips, and fastloaders, most of which sped things up at least fivefold. Additionally, the drive itself had a CPU and some RAM to spare - effectively a secondary computer dedicated to the sole task of feeding data to the primary computer (hence its phenomenal cost) - so it was programmable, and people came up with their own ways to improve things further. Eventually, non-standard formats were developed that loaded programs 25 times faster than normal.
  • Why, after the introduction of integrated controllers into every other storage device, does the floppy drive still have to be controlled by the motherboard? Sure, it makes the drive simpler to manufacture, but you're left with a motherboard that only knows how to operate a spinning mass of magnetic material. Try making a floppy "emulator" that actually uses flash storage, and you'll run into this nigh-impassable obstacle.
    • The floppy drive interface design made sense when it was designed (the first PC hard drives used a similar interface) and was later kept for backwards compatibility. However, a lot of motherboards also support IDE floppy drives (there may never have been an actual IDE floppy drive, but an LS-120 drive identifies itself as a floppy drive and can read regular 3.5" floppy disks), and a SCSI or USB device can likewise identify itself as a floppy drive. On the other hand, the floppy interface is quite simple if you want to build your own floppy drive emulator - such as the Gotek Floppy Emulator.
  • Sony's HiFD "floptical" drive system. The Zip Drive and the LS-120 SuperDisk had already attempted to displace the aging 1.44MB floppy, but many predicted that the HiFD would be the real deal. At least until it turned out that Sony had utterly screwed up the HiFD's write head design, which caused performance degradation, hard crashes, data corruption, and all sorts of other nasty problems. They took the drive off the market, then brought it back a year later... in a new 200MB version that was incompatible with disks used by the original 150MB version (and 720KB floppies as well), since the original HiFD design was so badly messed up that they couldn't maintain compatibility and make the succeeding version actually work. Sony has made a lot of weird, proprietary formats that have failed to take off for whatever reason, but the HiFD has to go down as the worst of the lot.
  • The IBM Deskstar 75GXP, nicknamed the "Deathstar". While it was a large drive for its time (ranging from 15 to 75 gigabytes in 2000), it had a disturbing habit of suddenly failing and taking your data with it. The magnetic coating was of subpar reliability and came loose easily, causing head crashes that stripped the magnetic layer clean off. One user with a large RAID server setup supposedly reported to their RAID controller manufacturer that they were replacing IBM Deskstars at a rate of 600-800 drives per day. There have been many hard drives criticized for various reasons, but the "Death Star" was something truly spectacular for all the wrong reasons.
    • There is anecdotal evidence that IBM knowingly kept selling the faulty drives while spewing out rhetoric about the industry-standard failure rates of hard drives. This denial strategy started a chain reaction that led to a collapse in customer confidence. Class-action lawsuits helped convince IBM to sell their hard drive division to Hitachi in 2002, and possibly to exit the consumer market altogether in 2005 by selling their PC division to Lenovo.
  • The Iomega Zip disk was undeniably a big success, but user confidence in the drives' reliability was shattered by the "Click of Death". Though tens of millions of drives were sold, thousands of them would fall out of alignment and damage whatever disk was inserted into them. That by itself might not have been catastrophic, but Iomega made a big mistake in downplaying the complaints of users who reported drive failures and being dismissive about their lost data.

    The Zip's worst problem wasn't even the fact that it could fail and potentially ruin a disk, but that such a ruined disk would go on to ruin whatever drive it was then inserted into. Which would then ruin more disks, which would ruin more drives, etc. Effectively a sort of hardware virus, it turned one of the best selling points of the platform (inter-drive compatibility) into its worst point of failure.

    The biggest kicker? The entire issue was due to a marketer removing a foam O-ring from the design to save a few pennies. When the head could not read the data on a disk, it would eject out of the disk, return to its starting position, and then attempt to read the disk again. The O-ring was there to stop the head from knocking against the edge of the mechanism and becoming deformed; without it, after a few such knocks the misshapen head would damage the next disk it tried to read.

    After a class-action lawsuit in 1998, Iomega issued a free replacement program, and in 2001 offered further rebates on future products. It was too little, too late: CD-R discs were by then more popular for mass storage and perceived as more reliable. The New Zealand site for PC World still has the original article available. Surprisingly, however, Iomega soldiered on, making two more generations of Zip drives before leaving the market in the late 2000s.
    • Iomega's previous magnetic storage product, the Bernoulli Box, was specifically designed to avert this kind of disaster: it exploited the Bernoulli principle, which keeps the flexible medium floating on a cushion of air so that the drive head physically cannot crash into it. Yes, Iomega already had a disk format designed to prevent exactly the failures the Zip drives suffered. When they designed the Zip disk specification, the Bernoulli approach was dropped, likely to save costs.
    • One more idiot move: Iomega decided that "if you can't beat them, join them" in the early 2000s and released the Zip CD line of CD burners. However, through bad luck, they unknowingly sourced batches of bad drives from Philips. This resulted in more coasters than you could've gotten from several years' worth of AOL free trial discs in your mailbox - apparently their QC department wasn't doing its job. Another scathing lawsuit and product replacement program later, they quickly switched suppliers to Plextor. Except that by then CD technology had improved and newer discs could hold 700MB (and later, 800MB) of data; on those Plextor-sourced drives the extra 50MB-150MB was walled off, so you could still only write 650MB when you could have used that sweet extra space (150MB was a lot even in 2007) on other drives. This eventually led to Iomega being bought out by EMC Corp., which later restructured the business into a joint venture with the conglomerate Lenovo.
  • Maxtor, now defunct, once sold a line of external hard drives under the OneTouch label. However, the USB 2.0 interface would often malfunction and corrupt the filesystem on the drive, rendering the data hard to recover. You were better off removing the drive enclosure and installing the disk on a spare SATA connection on a motherboard. Not surprisingly, Maxtor was having financial troubles already, before Seagate acquired them.
  • The 3M Superdisk and its proprietary 120MB "floptical" media were intended as direct competition to the Iomega Zip, but in order to penetrate a market that Iomega owned pretty tightly, the Superdisk needed a special feature to push it ahead. That feature was the possibility to write up to 32 megabytes on a bog-standard 1.44MB floppy, using lasers for alignment of the heads. Back then 32MB was significant storage, and people really liked the idea of recycling existing floppy stock - of which everybody had large amounts - into high-capacity media. The feature might just have given the Superdisk the edge it needed; unfortunately what wasn't immediately clear, nor explicitly stated, was that the drive was only able to do random writes on its specific 120MB disks. It could indeed write 32MB on floppies, but only if you rewrote all the data every time a change, no matter how small, was made - like a CD-RW disk with no packet-writing system. This took a relatively long time, and transformed the feature into an annoying gimmick. Disappointment ensued, and the format didn't even dent Iomega's empire before disappearing.
  • The Caleb UHD-144 was an attempt to gain a foothold in the floppy-replacement market. Unfortunately, it was ill-timed, with the company not taking a hint from the failures of Sony and 3M - if anything, it was an example of a "good" idea being rushed to market without anyone checking what it would be competing against - so the product never really got a chance to prove itself. Inexpensive CD-R media and the Zip-250 (itself quickly marginalized by the cost-effectiveness of CD-R discs, which could be read by the optical drives already present in numerous computers) meant the technology was dead on arrival.
  • Some DVD players, especially certain off-brand models, seem to occasionally decide that the disc you have inserted is not valid. The user ejects the disc, reinserts it, and hopes the DVD player decides to cooperate. This can be a headache if the machine is finicky about disc defects due to copy protection, or can't deal with the brand of DVD-R/+R recordable discs you use for your custom films. Bonus points if you have to crack a copy-protected disc and burn it onto a blank DVD because the player can't handle the master copy. The inverse situation is also possible, where a DVD player from a "reputable" brand won't let you watch the locked-down DVD you just spent money on.
    • Some DVD players are overly cautious about the discs they're willing to play because of regional lockout. Live in Australia and have a legally-purchased Region 4 DVD? Turns out it was an NTSC disc, and your DVD player is only willing to play PAL discs. Oops.
  • After solid-state drives started taking over from mechanical hard drives as the storage device of choice for high-end users, it quickly became obvious that transfer speeds would soon be bottlenecked by the Serial ATA standard, and that PCI Express was the obvious solution. Using it in the form of full-sized cards wasn't exactly optimal, though, and the smaller M.2 form factor is thermally limited and can be fiddly to install cards in. The chipset industry's answer was SATA Express, a clunky solution which required manufacturers to route two lanes of PCI Express and two SATA ports - standards with completely different ways of working - into the same connector. Just to make it even worse, the cable was an ugly mess consisting of four separate wires (two SATA, one PCI-E, and a SATA power connector that hung off the end of the cable). The end result was one of the most resounding failures of an industry standard in computing history, as a grand total of zero storage products made use of it (although a couple of manufacturers jury-rigged it into a way of connecting front-panel USB 3.1 ports), with SSD manufacturers instead flocking to the SFF-8639 (later renamed U.2) connector, which was just four PCI-E lanes crammed into a simple cable.
  • To call the Kingston HyperX Fury RGB SSD a perfect example of form over function would be a lie by omission, as with this SSD form actively sabotages function. Kingston thought it would be a good idea to cram 75 LEDs into a 2.5-inch enclosure without thermally isolating the actual storage or, apparently, adequately testing the thermals, and the result is catastrophic. The heat from the LEDs - potentially over 70 degrees Celsius - causes extreme thermal throttling that, as shown in this video, leads to performance issues that can prevent programs from starting and cause the computer to hang on boot; the uploader also speculated that it could corrupt data. The thermal throttling can get so bad that a gigantic fan is needed to cool the drive enough just to be able to turn the LED array off in software, at which point you might as well buy a normal SSD and keep the gimmicky RGB lighting separate from anything where performance matters. And before you ask "Well, why can't I just unplug the LEDs?", that just causes the thermonuclear reaction happening in your primary storage device to default to red and removes any control you have over it, because the lighting is powered through the drive's power connector. Assuming it isn't powered by demons.

    Game Consoles - Atari 
  • Atari may have been one of the pioneers that set the standard for future video game consoles, but they made their own share of mistakes:
    • While the Atari 5200 wasn't that poorly-designed a system in general — at worst, its absurdly huge size and power/RF combo switchbox could be annoying to deal with, but Atari eventually did away with the latter and was working on a smaller revision when the market crashed, forcing them to discontinue the system — its controllers were a different matter entirely. In many ways they were ahead of their time, with analogue movement along with start and pause buttons. Unfortunately, Atari cheaped out and didn't bother providing them with an auto-centring mechanism, and also built them out of such cheap materials that they usually tended to fail within months, if not weeks. The poor-quality controllers subsequently played a major part in dooming the system.
    • Likewise, the Atari 7800 was an overall reasonably well-designed system, whose primary flaw was that it wasn't suited to the direction that the NES was pushing console game development innote . One design decision did stand out as particularly baffling, however: the designers apparently decided they could save a few bucks by not bothering with a dedicated audio chip, instead relying on the primitive sound hardware in the Atari 2600 graphics processor that was included for backwards-compatibility purposes. By the time the 7800 was widely released, however, console games were widely expected to have in-game music, meaning that 7800 games were either noticeably lacking in this regard or, worse still, tried to use the 2600 audio hardware to render both music and sound effects, usually with horribly shrill and beepy results, such as in the system's port of Donkey Kong. Yes, the 7800's cartridge port supported audio expansion via an optional POKEY sound chip on the cartridge, but since that meant the cost of giving the 7800 proper sound capabilities suddenly fell on the developer, only two games ever shipped with a POKEY chip; everyone else skipped it to keep manufacturing costs down.
    • The Atari Jaguar suffered from this in spades:
      • The main hardware seemed to be designed with little foresight or consideration to developers. The "Tom" GPU could do texture-mapping, but it couldn't do it well (even the official documentation admits that texture-mapping slows the system to a crawl) thanks to VRAM access not being fast enough. This is the reason behind the flat, untextured look of many 3D Jaguar games and meant that the system could not stand up graphically to even the 3DO which came out around the same time, let alone the Nintendo 64, Sony PlayStation, or even the Sega Saturn, dooming the system to obsolescence right out of the gate. Yes, texture-mapping in video games was a fairly new thing in 1993 with the release of games like Ridge Racer, but the 3DO came out a month prior to the Jaguar with full texture-mapping capabilities, so one would think someone at Atari would catch on and realize that texture-mapped graphics were the future.
      • On the audio side, the system lacked a dedicated sound chip. Instead, it came with the "Jerry" DSP that was supposed to handle audio, except its audio capabilities were limited and the chip could also do general math calculations - though not both at once without heavily taxing the system. So instead of using the DSP as a sound chip, many developers opted to use it as a math co-processor to make up for the shortcomings of the "Tom" chip when the latter was used as a "main" CPU. The result was that many Jaguar games lacked music, most infamously the Jaguar port of Doom.
      • Finally, there was the inclusion of the Motorola 68000 CPU. It was only intended to manage the functions of the "Tom" and "Jerry" chips, but since it happened to be the exact same chip used in the Sega Genesis, developers were far more familiar with it than with the poorly-documented custom chips, and simply used the 68000 as the system's main CPU rather than bothering to figure out how to balance work across "Tom" and "Jerry". The end result of all of this was a very difficult and cumbersome system to program for that was also technically underwhelming, "64-bit"note  capabilities be damned.
      • The controller only included three main action buttons, a configuration which was already causing issues for the Sega Genesis at the time. In a baffling move, the controller also featured a numeric keypad, something that Atari had last done on the 5200. On that occasion the keypad was pretty superfluous and generally ignored by developers, but it was only taking up what would probably have been unused space on the controller, so it didn't do any harm by being there. The Jaguar's keypad, on the other hand, was far bigger, turning the controller into an ungodly monstrosity that has often been ranked as the absolute worst videogame controller of all-time.note  Atari later saw sense and produced a revised controller that added in three more command buttons and shoulder buttons, but for compatibility reasons they couldn't ditch the keypad - in fact, the five new buttons were just remaps of five of the keypad buttons - meaning that the newer version was similarly uncomfortable. The Jaguar's controller was in fact designed originally for the Atari Panther, their unreleased 32-bit console that was scheduled to come out in 1991 before it became obvious that the Genesis' 3-button configuration wasn't very future-proof. They evidently figured that the keypad gave them more than enough buttons and didn't bother creating a new controller for the Jaguar, a decision that would prove costly.
      • Things were turned up to eleven by the Atari Jaguar CD add-on. Aside from the crappy overall production quality of the add-on (the Jaguar itself wasn't too hot in this department, either) and poor aesthetics which many people have likened to a toilet seat, the CD sat on top of the Jaguar and often failed to connect properly to the cartridge slot, as opposed to the similar add-ons for the NES and Genesis which used the console's own weight to secure a good connection.

        Moreover, the disc lid was badly designed and tended to squash the CD against the bottom of the console, which in turn would cause the already low-quality disc motor to break apart internally from its fruitless attempts to spin the disc. Even inserting the Memory Track cartridge wrong can cause the main Jaguar unit to return a connection error: since cartridges in the main console take boot priority over CDs in the add-on, a badly-seated Memory Track is simply treated as a badly-seated game cartridge, and the system fails to boot.

        All of this was compounded by Atari's decision to ditch any form of error correction so as to increase the disc capacity to 800 megabytes, which caused software errors aplenty, and by the fact that the parts themselves tended to be defective. And remember, this was an add-on for a console that only sold 125,000 units, making its very existence an example of idiot design. Only 25,000 were produced, nowhere near that many were sold or even shipped, and only 13 games were ever released. Due to scarcity, Jaguar CDs go for ungodly amounts of money on auction sites, and due to the generally poor design, you're more likely to end up Trapped in Another World than you are to get a working unit.
      • Of note, it was not rare for the device to come fresh from the box in such a state of disrepair that highly trained specialists couldn't get it working - for example, it could be soldered directly to the cartridge port and still display a connection error. This, by the way, is exactly what happened when James Rolfe tried to review the system, and while Noah Antwiler of The Spoony Experiment was able to get his defective unit working, it immediately died for good as soon as he finished recording footage for the review he used it in.
      • As the Angry Video Game Nerd pointed out in his review of the console, the Jaguar is a top-loading console that lacks a door to protect the pin connectors from dust and moisture. This means you have to keep a game cartridge in the console at all times to protect it from damage. The Jaguar CD fixes the problem by having a door handle, but if you have a broken one the cartridge component of the add-on won't work!

    Game Consoles - Microsoft 
  • Microsoft's Xbox series aren't exempt from these, either:
    • Most revisions of the original Xbox used a very cheap clock capacitor with such a high failure rate that it's basically guaranteed to break and leak all over the motherboard after a few years of normal use, far shorter than the normal lifetime of this type of component. Making this more annoying is that the clock capacitor is not an important part: it does not save time information if the system is unplugged for more than 30 minutes and the console works fine without it. The last major revision (1.6) of the system uses a different, better brand and is exempt from this issue.
    • The infamous "Red Ring of Death" that occurs in some Xbox 360 units. It was a consequence of three factors: the introduction of lead-free solder, which is toxicologically safer but harder to properly solder with; inconsistent quality of the solder itself, which got better in later years but was prone to cracking under stress in early revisions; and bad thermal design, where clearance issues with the DVD drive caused Microsoft to use a dinky little heatsink for chips that were known to run hot. Result: the chips would overheat, the defective and improperly-applied solder would crack from the heat expansion, and the connections would break.
    • Microsoft released official numbers stating that 51.4% of all early 360 units were or would eventually be affected by this issue. Unfortunately, the problem got blown out of proportion by the media, so much so that people were afraid of encountering the issue on later versions that weren't affected. So afraid, in fact, that they'd often send in consoles that had a different and easily solvable issue: only "three segments of a red ring" mean "I'm broken, talk to my makers"; other red-ring codes could be as simple as "Mind pushing my cables in a bit more?", something easy to figure out if you Read the Freaking Manual.
    • The 360 has another design flaw that makes it very easy for the console to scratch your game discs if the system is moved while a disc is still spinning in the tray. The problem apparently affects few enough Xbox 360 owners (though, ironically, Microsoft themselves are fully aware of it) that when they made the Slim model of the system, they fixed the Red Ring issues (somewhat) but not the disc scratching.
      • Most mechanical drives can at least tolerate some movement while active. It's not recommended (especially for hard drives, where the head floats just nanometers above the platter), but not accounting for any movement at all is just bad. Anyone who has worked in the game-trading business (GameStop/EB Games, for example) can tell you that not a day goes by without someone trying to get a game fixed or traded in as defective due to the evil Halo Scratch.
      • Microsoft recommends keeping the original Xbox One model horizontal, because its optical drive isn't designed for any other orientation. Every 360 model was rated to work vertically, even with the aforementioned scratching problem, and Microsoft quickly restored support for vertical orientation with the updated Xbox One S model.
    • Most of the 360's problems stem from the inexplicable decision to use a full-sized desktop DVD drive, which even in the larger original consoles took up almost a quarter of the internal volume. Early models also had four rather large chips on the motherboard, due to the 90 nm manufacturing process, which also made them run quite hot (especially the GPU-VRAM combo that doubled as a northbridge). But the relative positions of the GPU and the drive (and the latter's bulk) meant that there simply wasn't any room for a practical heatsink. Microsoft tried to address this in two separate motherboard redesigns, the first of which finally added at least some heatsink, but it was only the third - when the chipset was shrunk to just two components, allowing designers to completely reshuffle the board and add a little fan atop the new, larger heatsink - that more or less did away with the problem. However, even the Slim version still uses that hugeass desktop DVD drive, which still provides no support for the disc, perpetuating the scratching problem.
    • The circular D-Pad on the 360's controller (likewise for the Microsoft SideWinder Freestyle Pro gamepad), which is clearly designed to look cool first and actually function second. Anyone who's used it will tell you how hard it is to reliably hit a direction on the pad without hitting the sensors next to it. The oft-derided U.S. patent system might be partially responsible for this, as some of the good ideas (Nintendo's + pad, Sony's cross pad) were "taken". Still, there are plenty of PC pads that don't have this issue to the same degree... at least until the 360 became successful and every third-party pad started ripping off its controller wholesale, unusable D-Pad and all, with acceptable D-pad designs only finally making a comeback about fifteen years later. Some even go as far as to otherwise perfectly emulate an entirely different controller's design, then replace whatever D-Pad design the original used with a 360-style one for no reason whatsoever, often packaging it in such a way that you can't tell the D-pad is of that design until you've opened it.
    • The original 360's optical audio port was built into the analog video connector. The hardware supported HDMI video and optical audio simultaneously, but the two ports were placed so close together that the bulky analog connector blocked an HDMI cord from being inserted. Removing the plastic shroud from the analog connector lets you use both at the same time.

    Game Consoles - Nintendo 
Nintendo has made their fair share of blunders over the years as well:
  • When Nintendo of America's engineers redesigned the Famicom into the Nintendo Entertainment System, they removed the pins which allowed for cartridges to include add-on audio chips and rerouted them to the expansion slot on the bottom of the system in order to facilitate the western counterpart to the in-development Famicom Disk System. Unfortunately, not only was said counterpart never released, there was no real reason they couldn't have run audio expansion pins to both the cartridge slot and expansion port, other than the engineers wanting to save a few cents on the necessary IC. This meant that not only could no western NES game ever have any additional audio chips, it also disincentivised Japanese developers from using them, as it would entail reprogramming the entire soundtrack for a western release.

    Additionally, while the front-loader design of the original NES is well recognized and iconic, the VCR-like mechanism isn't great at seating cartridges firmly against the reader without wear and tear, which leads to the connector pins being bent out of shape over time. This was not helped by the choice of brass-plated nickel connectors, which are prone to oxidation and therefore require regular cleaning.
  • That said, the Famicom and its peripherals weren't exactly free from their own design flaws either:
    • Infamously, the controllers were directly plugged onto the motherboard, requiring the console to be opened up in order to replace them. (Though no soldering is required, mercifully.) This would be bad enough, but the controller wires were also pathetically short (18 inches - the NES controller wires were three times that, for comparison) and plugged into the back of the console, meaning you pretty much had to play with the Famicom right next to you.
    • The majority of the Disk System's "Disk Cards" (effectively a proprietary form of floppy disk) had no protective shutter on the access window, meaning careless or clumsy users could accidentally touch the disk surface and wipe out sections of data.
  • The Super Nintendo Entertainment System was an overall improvement over the NES, doing away with the front-loading cartridge slot altogether and using better-quality pin connectors. That said, there's still one notable flaw: the power plug on the back of the console is prone to breaking. Whereas the inner barrel of the power plug on the NES is made of metal, the SNES uses a plastic barrel instead. The result is a more fragile power port whose inner barrel breaks off from the stress of repeatedly plugging and unplugging the power cord at the console end. It's not uncommon to see used consoles with broken AC-in plugs. Thankfully you can buy replacements, but you'll need to know how to solder to install one.
  • The Virtual Boy was a poorly-designed console in general, but perhaps the strangest design flaw was the complete absence of a head strap - the system was apparently meant to have one, but it was dropped. While this was ostensibly because of fears that the weight of the device could cause neck strain for younger players, for one thing pre-teens weren't officially supposed to be playing the device anyway, and for another the solution they came up with was a fixed 18-inch-tall stand that attached to the underside of the system. This meant that if you didn't have a table and chair of exactly the right height, you'd likely end up straining your neck and/or back anyway, in addition to the eye strain the system was notorious for. Even the R-Zone, a notoriously poor Shoddy Knock Off Product of the system, managed to make room for a head strap in its design.
  • The Game Boy Color is designed such that electrical interference from the system-on-a-chip causes slight distortions to the system's sound output.
  • The original model Game Boy Advance had no backlight for its screen. This was the case for the entire Game Boy line up until the SP (with one exception, the Japan-only Game Boy Light) to cut costs and increase battery life, but while older models got away with this thanks to their more simplistic graphics, the Advance's more, well, advanced graphics were harder to see against the system's unlit LCD screen. Worse is that the Advance used a darker screen than the Color, so even Game Boy Color games were affected to an extent, and many late Color gamesnote  and early Advance games had to use brighter color palettes to compensate. Some games such as Castlevania: Circle of the Moon were virtually unplayable due to the darker color palette that made the graphics difficult to see except in the best lighting conditions. In 2003, Nintendo finally rectified this with the Game Boy Advance SP, which reused the original GBA's screen but added a built-in front light, which solved the visibility problem at the expense of washed-out colors (the original DS would have the same issue). In 2005, they released another revision of the SP with a backlit screen (the same screen used in the DS Lite), solving the problem completely.
  • The GameCube could output games in 480p if the game supported it. However, the console itself didn't generate an analog 480p signal: it would only output 480p digitally, and the signal was converted back to analog by a special DAC built into the plug of the system's component cables. Nintendo quietly discontinued production of the cables, since less than 1% of consumers bought them and they were too expensive to make without recouping costs. Because the 480p signal can't be obtained without Nintendo's component cables, it was, for a time, impossible for a consumer to simply use third-party component cables. What's especially baffling is the presence of a separate digital port for component output in the first place, when the SNES had illustrated that it was possible to output high-quality RGB video directly from the existing analog port. Rumor has it that the digital port was intended for the scrapped stereoscopic 3D add-on, and that the add-on's cancellation led to the port being repurposed for component output. Luckily, the component cables' internal DAC was eventually reverse-engineered and cheaper aftermarket cables began hitting the market. Not only that, it was discovered that the digital signal coming from the GameCube's Digital AV port was compatible with the HDMI standard, so HDMI adapters began hitting the market as well - or you could just rip out the Digital AV port entirely and replace it with a regular HDMI port. In addition, Nintendo made the Wii (which plays GameCube discs) output analog 480p directly from the console rather than processing a digital signal through the cables themselves (component cables are still needed, albeit much cheaper ones), making the Wii a cheaper option for playing GameCube games in 480p than buying the GameCube cables secondhand at a premium.
  • The Nintendo DS only supported WEP Wi-Fi encryption, in an era when the much more secure WPA standard was taking over. Wanted to play online with a WPA router? You'd either need to set your router's encryption to WEP, disable your router's password entirely (both of which are security vulnerabilities), or buy a special dongle from Nintendo to plug into your computer and act as a "middleman" between your DS and your router - a dongle that was discontinued in 2007 due to a lawsuit against co-designer Buffalo brought by an Australian government agency.
  • The Wii has no crash handler, so if you manage to crash the system, you open it up to Arbitrary Code Execution, and a whole load of security vulnerabilities await you. Do you have an SD card inserted? Crash any game that reads and writes to it and even more vulnerabilities open up. Nintendo will tell you that they fixed these vulnerabilities through system updates, but in reality, they never did. In fact, all those updates did was remove anything that had been installed using the exploits - nothing stops you from using the same vulnerabilities again to reinstall it. All of this is a good thing if you like modding your console.
  • Wii U:
    • While the console ultimately proved a failure for several reasons, poor component choices helped contribute to its near-total lack of third-party support. It'd only be a slight exaggeration to say that the system's CPU was just three heavily overclocked Wii CPUs — the Wii's own CPU in turn being just a higher-clocked version of the CPU that had been used a decade earlier in the Nintendo GameCube — slapped together on the same die, with performance that was abysmally poor by 2012 standards. Its GPU, while not as slow, wasn't all that much faster than those of the PS3 and Xbox 360,note  and used a shader model in-between those of the older consoles and their successors, meaning that ported PS3/360 games didn't take advantage of the newer hardware, while games designed for the PS4 and Xbox One wouldn't even work to begin with due to the lack of necessary feature support. While Nintendo likely stuck with the PowerPC architecture for Backwards Compatibility reasons, the system would likely have fared much better if Nintendo had just grabbed an off-the-shelf AMD laptop APU - which had enough power even in 2012 to brute-force emulate the Wii, eliminating the main reason to keep with the PowerPC line - stuffed it into a Wii case and called it a day. Fortunately, Nintendo seems to have learned from this, basing the Nintendo Switch on an existing nVidia mobile chip which thus far has proven surprisingly capable of punching above its weight.
    • The console came with a paltry amount of on-board storage: 32GB on the Deluxe models, and a mere 8GB for the Basic ones. While this could be expanded with an external hard-drive, the USB ports on the console don't put out enough power by themselves to power a hard-drive, requiring either one that can be powered externally, or a Y-splitter cable that has two USB plugs at one end.
  • Nintendo Switch:
    • Don't buy a screen protector, or else the adhesive on it is liable to melt off thanks to the way the console vents its heat. Not that you'd really need one anyway, given how durable the screen is.
    • Another issue with the Nintendo Switch is its implementation of USB-C. The whole point of such a standard is that anything designed for it is supposed to work with any other device that uses it. However, this is the first time a Nintendo console has drawn its power over a USB standard, and it shows. Especially after the 5.0 update, there were reports of Switches being bricked by third-party docks, apparently because those docks didn't follow the USB-C standard properly; the issue became prevalent enough to force Nintendo to respond to the situation. It was later found that the reason certain third-party docks bricked Switches traced back to Nintendo wanting the Switch to slide smoothly in and out of the dock, so they tweaked the mechanical design of the dock's USB-C plug to be ever so slightly smaller. Cheaper third-party docks like those from Nyko tried to copy this smooth action with a "just do something and see if it works" approach, which caused some pins to make contact with the wrong neighbours. That by itself isn't necessarily fatal; the real problem is that these cheaper docks don't implement USB-C power delivery correctly, and the controller chip in the dock ended up sending 9V to a pin on the Switch that was expecting 5V. You can imagine that feeding a circuit nearly twice its expected voltage - and, into a fixed load, more than three times the expected power - is a bad thing.
    • When the Nintendo Switch was first released, homebrewers were poking around to find anything they could use to effectively jailbreak the system. Much to their surprise, they found NVIDIA's own programming guide, which tells you how to disable the security inside the system so that you can run unsigned code. That sounds fine on its face: programming guides are intended for developers, and disabling the security is useful for testing. The problem is that Nintendo never disabled this on retail units, and because it's a hardware flaw, it couldn't be patched out with system updates. It was corrected on later hardware revisions, starting with the regular Switch model sold in red boxes. This is also why first-generation Switch units are very pricey online, as the oversight is extremely valuable to homebrewers.
    • Another problem with the Nintendo Switch: the Joy-Con joysticks. The contacts inside them were made out of a cheap material that wore down after only a few months of play, causing "drift" - the stick registering inputs that aren't actually being made, so the game moves on its own and screws you up. The situation got bad enough that a class-action lawsuit was filed, because in a disturbingly out-of-character moment, Nintendo refused to comment or release any sort of troubleshooting guide for the issue.
    • The Switch also has a problem with the heat it generates. Because of the way the Switch's innards were designed, the system can become quite warm. Over time, the constant heat can cause the Switch to physically bend out of shape. Nintendo would later make a revised version of the Switch as well as the Switch Lite that addressed the problem.
    • One particular design oddity is that the USB 3.0 port located on the back of the dock is limited by software to USB 2.0 speeds. Rumors of a patch to "unlock" the port to USB 3.0 speeds circulated, but no such patch was ever released. It turns out that USB 3.0 can interfere with 2.4GHz wireless transmitters if the circuitry isn't properly shielded and sits too close to the transmitter, as was the case with the USB 3.0 port on the Switch dock. Nintendo likely noticed the problem during QA, and instead of replacing it with a USB 2.0 port or investing money into properly shielding the dock's circuit board, simply disabled USB 3.0 functionality altogether. Indeed, hacking in USB 3.0 support causes issues with wireless controller functionality, and the port was replaced with an ethernet port (which still runs through a USB 2.0 bus) on the OLED model's dock (HEG-007).
    • The battery and power circuitry in every model of the Switch has one issue not seen anywhere else: if the battery is allowed to drain completely and the console is then left that way for months (as little as two months can trigger it), the next time it is recharged, the electricity can fry the CPU, bricking the system.
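  A rough back-of-the-envelope power budget shows why the Wii U needs that Y-cable or an externally powered drive. This is a minimal sketch only: the 500 mA at 5 V figure is the nominal USB 2.0 per-port limit, and the spin-up draw is an assumed "typical" value for a 2.5-inch drive, not a measurement of any specific product.

      /* Rough USB power-budget sketch for the Wii U external-drive situation.
         Figures are nominal/assumed, not measured values. */
      #include <stdio.h>

      int main(void) {
          const double port_voltage   = 5.0;   /* USB supplies 5 V                    */
          const double port_current_a = 0.5;   /* USB 2.0 spec: 500 mA max per port   */
          const double drive_spinup_w = 4.5;   /* assumed typical 2.5" HDD spin-up draw */

          double one_port_w = port_voltage * port_current_a;   /* 2.5 W             */
          double two_port_w = 2.0 * one_port_w;                /* 5.0 W via Y-cable */

          printf("One USB 2.0 port:  %.1f W available\n", one_port_w);
          printf("Drive at spin-up:  %.1f W needed (assumed)\n", drive_spinup_w);
          printf("Y-cable (2 ports): %.1f W available -> %s\n", two_port_w,
                 two_port_w >= drive_spinup_w ? "enough" : "still short");
          return 0;
      }

  In other words, one port tops out at roughly 2.5 W while a spinning drive can briefly want about twice that, which is exactly the gap the Y-cable papers over.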
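  As for the dock fiasco above: the core rule of USB Power Delivery is that a charger may only raise the bus voltage after the device has explicitly requested it. Below is a minimal, purely illustrative sketch of that rule - the type and function names are hypothetical and come from neither Nintendo's firmware nor any real PD stack.

      /* Illustrative-only sketch of the USB Power Delivery rule the bad docks broke:
         the source must keep VBUS at 5 V until the sink explicitly requests more.
         All names here are hypothetical; this is not a real PD implementation. */
      #include <stdbool.h>
      #include <stdio.h>

      typedef struct {
          double offered_volts;    /* what the source advertises, e.g. 5.0, 9.0, 15.0 */
          double requested_volts;  /* what the sink actually asked for                */
          bool   request_received; /* has a valid request message arrived?            */
      } pd_contract;

      /* Decide what voltage may be driven onto VBUS. */
      static double vbus_voltage(const pd_contract *c) {
          if (!c->request_received)
              return 5.0;                  /* default safe level: plain 5 V       */
          if (c->requested_volts > c->offered_volts)
              return 5.0;                  /* bogus request: stay at 5 V          */
          return c->requested_volts;       /* only now raise VBUS as negotiated   */
      }

      int main(void) {
          pd_contract no_request = { 9.0, 0.0, false };
          pd_contract negotiated = { 9.0, 9.0, true  };

          printf("Before negotiation: %.1f V\n", vbus_voltage(&no_request)); /* 5.0 */
          printf("After negotiation:  %.1f V\n", vbus_voltage(&negotiated)); /* 9.0 */
          /* The faulty docks effectively skipped the first case and shoved 9 V
             at pins the Switch expected to see 5 V on. */
          return 0;
      }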

    Game Consoles - Ouya 
  • The Kickstarter-funded Ouya console has gone down in history as having a huge raft of bad ideas:
    • As with many console systems, the Ouya gave users the option of funding their accounts by buying funding cards at retail, which provided codes that could be typed in to add money to an account. Unfortunately, the Ouya will not proceed beyond boot unless a credit card number is entered, making the funding cards a pointless option.
    • When an app offered an in-app purchase, a dialog was displayed asking the user to confirm the purchase - but no password entry or anything similar was required, and the OK button was the default. This meant that if you were pressing buttons too quickly when an app offered a purchase, you could accidentally confirm it and be charged.
    • The system was touted from the beginning as open and friendly to modification and hacking. This sparked considerable interest, and it became obvious that a sizable part of the supporting community didn't really give two hoots about the Ouya's intended purpose as a gaming console; rather, they just wanted one to hack and turn into an Android or, preferably, Linux computer. The Ouya people - who, like every other console manufacturer, counted on making their profit more from selling games than hardware - promptly reneged on the whole openness thing and locked the console down tight. The end result was a single-purpose gadget that had a slow, unintuitive, and lag-prone interface, couldn't run most of the already-available Android software despite being an Android system, and didn't have many games that gamers actually wanted to buy.
    • Also, the HDMI output was perpetually DRMed with HDCP. There was no switch to disable it, not even when turning on developer mode. People who were expecting the openness promised during the campaign were understandably angry at being lied to, as were those hoping to livestream and record Let's Plays of the games.
    • Even in its intended use, the Ouya disappointed its users. The main complaint was that the controllers were laggy; on a console of mostly action-packed casual games, this is very bad. It wasn't even a fault of the console itself, as a controller that exhibits this on an Ouya will have the same input lag when paired to a computer. Apparently, not everyone's controllers have this issue, so opinions differ on whether it was just a large batch of faulty controllers or a design flaw that surfaced during beta testing but was knowingly ignored and quietly corrected in subsequent batches.
    • The fan used to prevent overheating isn't pointed at either of the two vents. Never mind that the console uses a mobile processor, which doesn't even need a fan. In theory, the fan would allow the processor to run at a higher sustained speed. In practice, it blows hot air and dust directly against the wall of the casing, artificially creating frequent issues due to overheating.

    Game Consoles - Sega 
  • Sega is known for its long rivalry with Nintendo, but one thing the two companies share is a history of poor hardware decisions:
    • Sega's most commercially successful console, the Sega Genesis, had its own share of mistakes:
      • The Genesis, unlike its chief rival the Super Nintendo Entertainment System, was fully backwards compatible with the Sega Master System. However, it wasn't ready to play Master System games out of the box. Instead of making the cartridge slot able to accept Master System games, you had to shell out extra cash for a separate Power Base Converter, basically a Master System cartridge adapter. Having to buy a separate attachment just to have access to your library of previous-gen games almost defeats the whole point of backwards compatibility in the first place. This alone is bad enough, but the way it was designed meant it wouldn't fit into the Genesis 2. Since the Master System was actually successful in Europe, they got a special adapter that was remolded to be compatible with the Genesis 2. American and Japanese markets weren't extended the same courtesy.
      • Bizarrely, the original incarnation of the Genesis, despite being able to produce stereo sound, did not output it through the console's AV port, instead routing it through the front headphone jack. This meant buying a 3.5mm-to-stereo-RCA converter and running a cable from the front of your console to the back of your TV or stereo system if you wanted stereo sound - not exactly an elegant solution. The Genesis 2 rectified this problem by ditching the headphone jack and outputting stereo right out of the AV port, at the expense of a lower-quality sound chip.
    • The Sega Saturn is, despite its admitted strong points on the player end, seen as one of the worst major consoles internally. It was originally intended to be the best 2D gaming system out there (which it was), so its design was directly based on the 32X with higher clock speeds, more memory, and CD storage. However, partway through development Sega learned of Sony's and Nintendo's upcoming systems (the PlayStation and Nintendo 64, respectively), which were both designed with 3D games in mind, and realized the market - especially in their North American stronghold - was about to shift under their feet; they wouldn't have a prayer of competing. So, in an effort to bring more and more power to the console, Sega added an extra CPU and GPU to the system, which sounds great at first... until you consider that there were also six other processors that couldn't interface too well. This also made the motherboard prohibitively complex, making the Saturn one of the most expensive consoles of its time to manufacture. And lastly, much like the infamous Nvidia NV1, which has its own example on this very page, the GPU worked on four-sided primitives while the industry standard was triangles - a significant hurdle for multiplatform games, as titles developed with triangular primitives would require extensive porting work to adapt them to quads. All this piled-on complication made development on the Saturn a nightmare. Ironically, consoles with multiple CPU cores would become commonplace two generations later with the Xbox 360 and PlayStation 3; like a lot of Sega's other products of that era, they had attempted to push new features before game developers were really ready to make use of them.
    • The Sega Dreamcast was, for the most part, a solidly designed machine, taking many of the lessons learned from the Saturn and applying them. That didn't mean it didn't have its own share of mistakes:
      • Ironically, one aspect of the Dreamcast that is much worse than the Saturn is the system's security. The Saturn had a robust security system, similar to the Sony PlayStation's, that took decades to defeat, so when Sega was designing the Dreamcast's copy protection, they took what they had learned from the Saturn and threw it in the garbage in favor of using a proprietary GD-ROM format (the same format was also being used on their arcade hardware at the time) to boot games from as the console's sole security. On paper, this seemed like a good idea, but there was one gaping hole in the Dreamcast's security: the system could also accept another of Sega's proprietary formats, called MIL-CD, which was like an Enhanced CD but with Dreamcast-specific features. The format was a major flop, with no MIL-CDs ever released outside of Japan, but pirates quickly figured out that MIL-CD had no hardware-level copy protection. A MIL-CD's boot data was scrambled at the factory, and the Dreamcast contained an "unscrambler" that would descramble the boot data into something readable and boot into the game. A Dreamcast SDK was all pirates needed to defeat this, and running pirated games on the Dreamcast was as easy as burning a cracked ISO onto a CD-R and putting it in the machine. Sega removed MIL-CD support in a late revision of the Dreamcast to combat this, but it was too late, and the Dreamcast would become the most pirated disc-based home console of all time.
      • On a lesser note, the Dreamcast could only read 100KB of save data from a single VMU at a time, with no room for expansion. Compared to the PlayStation 2's gargantuan 8MB memory card, this was absolutely tiny. A 4X Memory Card was released, but it had to work around the Dreamcast's design flaw of only reading 100KB from a card at a time by separating its 400KB of space into four "pages". The major downside was that game saves couldn't span multiple pages, as it was effectively four memory cards in one, and some games wouldn't detect it or would outright crash when trying to read from it. You also couldn't copy game saves between pages without a second memory card, and the 4X Memory Card didn't support any VMU-specific features, as it lacked a screen and face buttons.
      • One of the most commonly praised aspects of the Sega Saturn was its controller. The Sega Dreamcast pad, in what was certainly an additional sore spot for anyone scorned by the Saturn's early cancellation, was a disappointment. The overall idea wasn't a bad one: an evolution of the Saturn 3D pad with a slot for its own PocketStation-style memory card module, the VMU. But alas, Sega bungled it with some questionable choices that were considered serious steps back. The DC pad had two fewer buttons than the Saturn pad or any of the other pads of the time; according to Kenji Tosaki, two of the face buttons were removed at the request of developers to simplify game control, with executives and marketing following suit, reasoning that fighting game enthusiasts would purchase arcade sticks. Any game needing more inputs would have to make do with the analog shoulder triggers, which, due to their travel length, were less than ideal for any game not making full use of them. The d-pad was a cross pad that protruded from the curvature of the rest of the controller, with edges that grated on thumbs during extended use, particularly in fighters - a step back from the Saturn's legendary circular pad. All of this might be somewhat forgivable if the controller were easy to handle, but unfortunately, the DC pad was similar in size and shape to the Xbox's gargantuan original "Duke" pad, and worse, the cord - possibly due to the empty space required for the VMU slot - protruded from the rear of the controller rather than the front, reducing the available cord length by 6 inches and increasing the likelihood of accidentally yanking the console. Whoops. It's telling that while the Saturn was considered a comparative misstep, and even after being Vindicated by History has had less of its legacy revisited (the system being less port-friendly and its low localization ratio making re-releases few and far between), the Saturn pad, now seen as the ultimate expression of 2D game control, has been officially recreated ever since the PS2 era, while the Dreamcast pad stands as something of an evolutionary dead end.

    Game Consoles - Sony 
  • Sony's PlayStation line has had its fair share of baffling design choices:
    • The Series 1000 and Series 3000 units (which converted the 1000's A/V RCA ports to a proprietary A/V port) of the original PlayStation had the laser reader array at 9 o'clock on the tray. This put it directly adjacent to the power supply, which ran exceptionally hot. Result: the reader lens would warp, causing the system to fail spectacularly and requiring a new unit. Sony admitted this design flaw existed... after all warranties on the 1000 and 3000 units were up and the Series 5000 with the reader array at 2 o'clock was on the market.
    • The first batch of PS2s was known for developing a "Disc Read Error" after some time, eventually refusing to read any disc at all. The cause? The gear for the CD drive's laser tracking had absolutely nothing to prevent it from slipping, so the laser would gradually drift out of alignment.
    • The original model of the PSP had buttons too close to the screen, so the Einsteins at Sony moved the switch for the Square button over without moving the location of the button itself. Thus every original-model PSP had an unresponsive Square button that would also often stick. The Square button is the second-most important face button on the controller, right after X; in other words, it's used constantly during the action in most games. Sony president Ken Kutaragi confirmed that this was intentional, conflating this basic technical flaw with the concept of artistic expression. This is a real quote sourced by dozens of trusted publications. The man actually went there.
      Ken Kutaragi: I believe we made the most beautiful thing in the world. Nobody would criticize a renowned architect's blueprint that the position of a gate is wrong. It's the same as that.
    • Another PSP-related issue was that if you held the original model a certain way, the disc would spontaneously eject. It was common enough to be a meme on YTMND and among the early Garry's Mod community.
    • The usage of an optical disc format on the PSP can qualify. On paper, it made perfect sense to choose optical discs over cartridges because of the former's storage size advantages and relatively low manufacturing cost, and it enabled the release of high-quality video on the PSP. After all, utilizing optical discs instead of cartridges had propelled the original PlayStation to much greater success than its competition from Nintendo, so there was no reason to assume the same thing wouldn't happen again in the handheld market. However, in practice, optical discs quickly proved to be a poor fit for a handheld system. Sony's Universal Media Disc (UMD) was fragile: there were many cases of the outer plastic shell that protected the actual data disc cracking and breaking, rendering the disc useless until the user bought an aftermarket replacement shell. In addition, it wasn't uncommon for the PSP's UMD drive to fail due to wear on its moving parts. UMD also failed to catch on as a video format, as its proprietary technology made the discs more expensive to produce than other optical formats, so they were priced higher than DVDs despite holding noticeably less data. There were also no devices besides the PSP that could play UMD movies, meaning that you were stuck watching your UMD movies on the PSP's small screen and couldn't swap them with your friends unless they too had a PSP. This drove away consumers, who would rather purchase a portable DVD player and have access to a cheaper media library, while the more tech-savvy could rip their DVDs and put them on Memory Sticks to watch on the PSP without a separate disc. In addition, UMD load times were long compared to those of a standard Nintendo DS cartridge, which Sony themselves tried to mitigate by doubling the memory of the PSP in later models to use as a UMD cache. By 2009, Sony themselves were trying to phase out the UMD with the PSP Go, which did not have a UMD drive and relied on digital downloads from the PlayStation Store, but it was too late; most games were already released on UMD while very few were actually made available digitally, so the Go blocked off a major portion of the PSP library without offering any real advantage to make up for it, since the PlayStation Store was already available on all PSP models - which led to consumers ignoring the Go. Sony's decision to use standard cartridges for the PSP's successor, the PlayStation Vita, seemed like a tacit admission that using UMD on the PSP was a mistake.
    • Like the Xbox 360, the PlayStation 3 suffered from some growing pains. It also used the same lead-free solder that was prone to breakage, and while Sony designed the PS3 for quiet operation and overall gave it a better cooling system than the Xbox 360, there was one major problem: Sony had used low-quality thermal paste on the Cell and RSX processors that was prone to drying out, and it dried out quickly. The result? PS3s would begin running loud mere months after being built, and since the processors were no longer making proper contact with their heatsinks, they became prone to overheating, shortening the chips' lifespan significantly (especially the RSX) and potentially disrupting connections between the chips and the motherboard due to the extreme heat. Worse, Sony connected the chip dies to their integrated heat spreaders with that same thermal paste instead of solder, requiring a delidding of the chips to fully correct the problem - which could brick the system if not performed correctly. But that wasn't all: Sony also used NEC/TOKIN capacitors in the phat-model PS3s, which, while less expensive and more compact than traditional capacitors, also turned out to be less reliable and prone to failure, especially under the excessive heat created by the dried-out thermal paste, or under the stress of running more demanding late-life titles like Gran Turismo 6 or The Last of Us. Sony corrected these problems with the Slim and Super Slim models.
    • Reliability issues aside, the PS3's actual hardware wasn't exactly a winner either, and depending on who you ask, the PS3 was either a well-designed if misunderstood piece of tech or had the worst internal console architecture since the Sega Saturn. Ken Kutaragi envisioned the PlayStation 3 as "a supercomputer for the home", and as a result Sony implemented their Cell Broadband Engine processor, co-developed with IBM and Toshiba for supercomputer applications, into the console. While this in theory made the console much more powerful than the Xbox 360, in practice it made the system exponentially more difficult to program for, as the CPU was not designed with video games in mind. In layman's terms, it featured eight individually programmable "cores": one general-purpose, the others much more specialized, with only limited access to the rest of the system. Contrast that with the Xbox 360's Xenon processor, which used a much more conventional three-core, general-purpose architecture that was far easier to program for - and that was exactly the PS3's downfall from a hardware standpoint. The Cell's unconventional architecture meant it was notoriously difficult to write efficient code for (a routine that takes only a few lines of conventional code could easily balloon into hundreds of lines of Cell code), and good luck rewriting code designed for conventional processors to run on it. This explains why many multi-platform games ran better on the 360 than on the PS3: many developers weren't keen on spending development resources rewriting their game to run properly on the PS3, so instead they would run the game on the general-purpose core and ignore the rest, effectively using only a fraction of the system's power (a minimal illustrative sketch of the kind of restructuring involved appears at the end of this folder). While developers would later put out some visually stunning games for the system, Sony saw the writing on the wall: the industry had moved towards favoring ease of porting across platforms over the raw power of highly bespoke but hard-to-work-with architectures, and Sony abandoned weird, proprietary chipsets in favor of off-the-shelf, easier-to-program-for AMD processors from the PS4 onward.
    • A common criticism of the PlayStation Vita is that managing your games and saves is a tremendous hassle: for some reason, deleting a Vita game will also delete its save files, meaning that if you want to make room for a new game you'll have to kiss your progress goodbye. This can be circumvented by transferring the files to a PC or uploading them to the cloud, but the latter requires a PlayStation Plus subscription to use. One wonders why they don't allow you to simply keep the save file like the PS1 and PSP games do. This is made all the more annoying by the Vita's notoriously small and overpriced proprietary memory cards (itself possibly based on Sony's failed Memory Stick Micro M2 format, not to be confused with M.2 solid state drives, as they have very similar form factors, but the M2 is not compatible with the Vita), which means that if you buy a lot of games in digital format, you probably won't be able to hold your whole collection at the same time, even if you shell out big money for a 32GB (the biggest widely-available format, about $60) or 64GB (must be imported from Japan, can cost over $100, and is reported to sometimes suffer issues such as slow loading, game crashes, and data loss) card.
    • And speaking of the Vita, the usage of proprietary memory cards can count as this: rumor has it that the Vita was designed with SD cards in mind, but greedy executives forced Sony engineers to use a proprietary memory card format, crippling the system's multimedia capabilities, since the biggest memory card you could buy for the system was only 32GB (64GB in Japan) and a sizable MP3 library can easily take up half of that. Worse, the Vita came with no memory card out of the box and had no internal flash memory, so an unsuspecting customer might be greeted with a useless slab of plastic until they shelled out extra cash for a memory card. Sony's short-sighted greed over the memory cards is cited as one of the major contributing factors in the console's early demise. The PCH-2000 models (often nicknamed the PS Vita Slim) do come with internal flash memory, but the damage was already done - and if you insert a memory card, you cannot use the flash memory.
      • Another negative side-effect of Sony using proprietary memory cards for the Vita is how user-unfriendly it is to store media on the console. Yes, the PSP used Sony's proprietary Memory Stick Duo, but at least that format was widely adopted by Sony (and some third parties) outside the PSP, so card readers for it are readily available to this day - and with the PSP, you could plug the system into a computer and simply drag and drop media files onto it. The Vita doesn't do that: in what was possibly an effort to lock down the Vita in the wake of the rampant hacking and piracy on the PSP (which may also have influenced the decision to use proprietary memory cards), Sony made it so that the Vita needed to interface with special software installed on the user's PC, called the Content Manager Assistant, to transfer data. In addition, the user needed to select a directory for the Vita to copy from and select which files to copy from the Vita, which is much less convenient than simply dragging and dropping files directly onto the system like you would with a smartphone or the PSP. Also, you needed to be logged into the PlayStation Network to do any of this. Finally, it was all rendered moot when hackers found an exploit that allowed Vita users to install unauthorized software and run homebrew on the system. This was accomplished via, you guessed it, the Content Manager Assistant.
    • The way the Vita handled connecting to cellular networks. If implemented correctly, this could have been wildly innovative. However, that's not what Sony did. Sony's biggest mistake was opting to have the Vita connect to 3G in a period when 4G was overtaking 3G in popularity, meaning customers likely could not use their existing data plans to connect their Vitas to the internet over cellular. But that wasn't all: 3G connectivity was exclusive to AT&T subscribers, meaning that even if you were still on 3G, if you were subscribed to a carrier other than AT&T, you had to purchase a separate data plan just to connect your Vita to the network - $30 monthly for a mid-tier data limit. Even then, 3G functionality was extremely limited, only allowing the user to view messages, browse the internet, download games under 20MB, and play asynchronous (i.e. turn-based) games online. It was such a hassle for such a restricted feature (and many smartphones let users tether Wi-Fi devices to the cellular network, which, while not as convenient, allowed the Vita to behave as if it were on standard Wi-Fi) that it wasn't worth the extra $50 for the 3G-enabled model. The implementation was so bad that Sony scrapped 3G altogether for the PS Vita Slim.
    • While the PlayStation 4 is mostly a well-built console, it has an Achilles' Heel in that the heat exhaust vents on the back are too large. The heat produced by the system invites insects to crawl inside the console, which can then short-circuit the console if they step on the wrong things. If you live in an area where insects are hard to avoid or get rid of, owning a PS4 becomes a major crapshoot.
    • Another, more minor flaw of the PS4 (one that persists across all three variations of the console, no less) is that the USB ports on the front sit in a narrow recess, which makes it impossible to use larger USB drives or cables with them.
    • The original run of PS4 consoles used capacitive touch sensors for the power and eject buttons. Unfortunately, the eject button had a known fault where it would activate spuriously, causing the console to spit out discs in the middle of games or even resist them being inserted. Later versions replaced the capacitive sensors with physical buttons, but nothing was done for owners of the older PS4s - especially insulting given that discs could be ejected in software, so simply offering the user a menu option to disable the button would have instantly worked around the problem.
    • The PS4's hard drive is connected to the rest of the console using an internal SATA-to-USB interface. While this isn't a problem with the stock HDD, it will bottleneck an SSD should you choose to upgrade to one, keeping the upgraded load times from being as fast as they could be.
      • Another case of questionable conversions in the PS4: The HDMI output comes via a DisplayPort-to-HDMI converter chip, when the APU has a perfectly good HDMI output that goes completely unused. Not a performance problem, but just plain weird.
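  To make the Cell example above a bit more concrete, here's a minimal, purely illustrative C sketch - not actual PS3 or SPE code, and the tiny "local store" with explicit copies merely stands in for the real DMA transfers - of why code written for a conventional core had to be restructured for the specialized ones:

      /* Illustrative-only contrast between "conventional core" code and the kind of
         restructuring the Cell's specialized cores demanded. Not real PS3/SPE code;
         the local-store size and explicit copies stand in for the real DMA setup. */
      #include <stdio.h>
      #include <string.h>

      #define N            4096
      #define LOCAL_STORE  256   /* pretend the specialized core can only see this much */

      /* What you'd write for a general-purpose core: touch memory directly. */
      static void scale_simple(float *data, int n, float k) {
          for (int i = 0; i < n; i++)
              data[i] *= k;
      }

      /* What the specialized cores force on you: stage data into a small local
         buffer, process it there, and copy the results back, chunk by chunk. */
      static void scale_chunked(float *data, int n, float k) {
          static float local[LOCAL_STORE];
          for (int off = 0; off < n; off += LOCAL_STORE) {
              int len = (n - off < LOCAL_STORE) ? n - off : LOCAL_STORE;
              memcpy(local, data + off, len * sizeof(float));   /* "DMA in"  */
              for (int i = 0; i < len; i++)
                  local[i] *= k;
              memcpy(data + off, local, len * sizeof(float));   /* "DMA out" */
          }
      }

      int main(void) {
          static float a[N], b[N];
          for (int i = 0; i < N; i++) a[i] = b[i] = (float)i;
          scale_simple(a, N, 2.0f);
          scale_chunked(b, N, 2.0f);
          printf("Same result? %s\n", memcmp(a, b, sizeof a) == 0 ? "yes" : "no");
          return 0;
      }

  Multiply that bookkeeping across an entire game engine and it's easy to see why many studios just parked everything on the one conventional core and called it a day.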

    Game Consoles - Other 
  • After mocking the GBA as childish in its PR, Nokia created the complete joke of a design that was the original N-Gage. As a phone, the only way to speak or hear anything effectively was to hold the thin side of the unit to your ear (earning it the derisive nickname "taco phone" and the infamous "sidetalking"). From a gaming point of view it was even worse, as the screen was oriented vertically instead of horizontally like most handhelds, limiting the player's view of the game field (very problematic with games like the N-Gage port of Sonic Advance). Worst of all, however, was the fact that in order to change games, one had to remove the casing and the battery every single time.
    • During the development of the N-Gage, Nokia held a conference where they invited representatives from various game developers to test a prototype model and give feedback. After receiving numerous suggestions on how to improve the N-Gage, Nokia promptly ignored most of them on the grounds that they were building the machines on the same assembly lines as their regular phones and were not going to alter that.
    • In what was an early precursor to always-online DRM, the N-Gage required users to be connected to a cellular network to even play games, making the system virtually useless unless you either transferred your existing plan over to the N-Gage or bought a separate plan so you could keep your old phone and use the N-Gage as a second device. This was unfortunately the norm for many cell phones at the time, which may have made sense for a regular phone, but not for something being advertised as a competitor to the Game Boy Advance. Luckily, inserting a dummy SIM card can trick the N-Gage into thinking it's connected to a working cellular network, letting it run games offline.
  • Much like the Atari 5200, the Intellivision wasn't too badly designed a console in general - the only major issue was that its non-standard CPU design meant development wasn't quite as straightforward as on its contemporaries - but the controllers were a big problem. Instead of a conventional joystick, they used a flat disc that you had to press down on, which rapidly became tiring and didn't allow for very precise control. The action buttons on the side were also rather small and squat, making them difficult to push when you really needed to. However, by far the biggest issue was that Mattel for some reason decided that the controllers should be hard-wired to the console, making it impossible to swap them out for third-party alternatives or buy extension cables for players who preferred to sit further away from their TV set. The controller issues have been cited as one of the major reasons the Intellivision "only" managed to carve out a niche as the main alternative to the Atari 2600 in its early years, while the ColecoVision - which was more powerful, easier to develop for, and, despite having a controller that was just as bad if not worse than the Intellivision's, could actually have it swapped for third-party alternatives - started thoroughly clobbering the 2600 in sales when it arrived on the scene a few years later.
  • While it is fully in What Could Have Been territory, Active Enterprises, the developers of the (in)famous Action 52, had plans to develop a handheld console, the Action Gamemaster. Besides its own cartridges, it would have been able to play cartridges from other consoles as well as CD-ROM games, and it would have had a TV tuner on top of that. Ergonomics aside, try to imagine the weight and battery life of such a device with The '90s technology. And then look at the concept art and realize how bizarrely proportioned everything is, that there's no obvious place to put cartridges, CDs, or the required accessories, and how vulnerable that screen is. The phrases "pipe dream" and "ahead of its time" barely even begin to describe this idea, and the fact that Active Enterprises head Vince Perri thought it would be anything other than Awesome, but Impractical just goes to show how overambitious and out of touch with not only the video game market but technology in general the man was.
    miiyouandmii2: It doesn't even seem like it would be portable to begin with; if the screen was 3.2 inches big, overall, it was 15 inches wide!
    Text on screen: It's not really surprising, is it, that no-one tried to save Active Enterprises?
  • The Philips CD-i has a rather egregious design flaw: on some models (it's not exactly known which), when the internal battery dies, in addition to losing your save files and the internal clock, there is a chance that a file required to boot the system, stored in the battery-backed RAM, can be lost or corrupted, turning it into a regular CD player at best and effectively bricking the system at worst (read here for more information). Replacing the battery is no easy feat either: it's buried inside a "Timekeeper" chip, and the plastic casing must either be drilled away or the chip replaced entirely. Contrast this with the Sega Saturn, where the battery is easily accessible in the system's Expansion Port (and when it dies, the worst that happens is that your save data, which can be backed up to an external memory cartridge, is deleted), or the Sony PlayStation, which doesn't have an internal battery at all.
    • Another design oddity: the CDI-450 model has no mechanism on the CD spindle to secure the disc in place.
    • Later in the system's life, it became abundantly clear that Philips did not design the CD-i with video games in mind, as it lacked several game-specific hardware features like sprite scaling and sprite rotation - things that the Super Nintendo and Sega Genesis had to an extent (and were fully capable of with an additional graphics co-processor, and the latter system could push very primitive, untextured polygons without any help). The CD-i was more or less designed for basic interactive software, such as point-and-click edutainment titles, so when Philips tried to shift the focus of the system to full-fledged video games such as the ill-fated Hotel Mario and the Zelda CD-i trilogy, these games already looked dated and primitive compared to the likes of Donkey Kong Country and Star Fox. Any opportunity to rectify this via hardware revisions was stonewalled by the CD-i's Green Book standard (which was both a data specification and a hardware specification), on which Philips wouldn't budge, prioritizing wide compatibility over keeping their hardware up to date. This doomed the CD-i as a video game platform, especially as it was being supplanted by more powerful machines such as the 3DO (which could do everything the CD-i could do, except better), the Sega Saturn, the Nintendo 64, and most damningly, the Sony PlayStation. Video games aside, this also ensured that the CD-i could not keep up with the rapidly evolving technology of the era, leaving it in the dust of more capable media appliances such as the DVD player.
  • The Nuon was a hardware standard for DVD players that enabled 3D video games to be played on the player in addition to offering enhanced DVD playback features. While it was intended as a competitor to contemporary sixth-generation video game consoles such as the PlayStation 2, GameCube, and Xbox, its hardware was woefully underpowered for the era, barely outperforming the Nintendo 64. This, and the fact that Nuon circuitry was offered in so few models, ensured that the format was dead on arrival.

    Video Games 
Sometimes, bad design decisions can ruin a game even if it doesn't suffer from Idiot Programming.

  • Power Gig: Rise of the SixString was an attempt to copy the success of Guitar Hero and Rock Band. However, the game's poorly-designed peripherals, among other issues, caused it to fail and be mostly forgotten. Of note is its drum kit, which attempted to solve one of the main problems with its competitors' drums: playing drums in these games is very noisy, which makes the game impractical when living with other people or in an apartment. Power Gig's "AirStrike" drum kit gets around this by not having you hit anything: instead of hitting physical drum pads, you swing specially-made drumsticks above motion sensors, allowing you to drum silently. The downside is that since you're not hitting anything, it's hard to tell where you're supposed to swing, and whether that note you missed was because you didn't hit the right pad or the drum just failed to detect your movement. The lack of feedback made using these drums more frustrating than fun. Engadget's hands-on preview had few positives to say about it, which is particularly notable considering how such articles are meant to drum up hype for the game.
  • The drums that initially came with Rock Band 2 had a battery casing that wasn't tight enough, which would cause the batteries to come loose if it was hit too hard. Since this was a drum set, it was going to be hit repeatedly and harshly. The proposed solution from the developers was to stuff a folded paper towel into the battery case along with the batteries and hope that made everything stay in place. Later versions of drum sets wouldn't have such a problem, but it leaves one to wonder how this got missed in playtesting.
  • The Rock Revolution drumkit is seen by many as a monstrosity with an overcomplicated layout that doesn't make any sense. Konami were so focused on making "the most realistic drum peripheral on the market" - yet keeping the "cymbals" as pads - that they completely disregarded common sense and good design practices. The result is six pads scattered around a slab of plastic in a way that doesn't line up with the on-screen notes, making it extremely difficult to play, especially with gray notes, which none of Rock Revolution's competitors used and which are difficult to see on-screen. There's a reason Guitar Hero and Rock Band drumkits don't have realistic layouts, even with cymbals - realistic layouts only work when user-customised. One wonders why they didn't just use the same type of drumkit as Drum Mania.
  • The original cabinet for the first Street Fighter had two giant pressure-sensitive buttons for punches and kicks, with the strength of your attacks increasing the harder you slammed down on them. Putting aside the stiffness of the combat itself, the fact that the game all but encouraged hitting buttons as hard as possible meant that more often than not, players ended up damaging the buttons and the machine, making the cabinet an absolute maintenance nightmare. But even if you avoided literally breaking the game, repeatedly having to slam your fist down on big rubber buttons was a good way to make it very sore very quickly, and occasionally, players would end up missing them and damaging their hands on the cabinet. Needless to say, the two-button cabinet flopped, and Capcom quickly commissioned a new and cheaper cabinet design that replaced the two big rubber buttons with six smaller plastic buttons, with two rows for light, medium and heavy attacks. Not only did the new cabinet significantly outsell the original and make the game profitable, but the six-button control scheme would become the standard for the rest of the series, especially after it was paired with Street Fighter II’s much more refined combat.
    Professor Thorgi: Street Fighter 1 is the only fighting game in history that actually fought back.
  • At Quakecon 2019, Bethesda revealed and released a brand new set of ports for Doom, Doom II, and Doom³ on the then-current consoles (PS4, Xbox One, and Nintendo Switch) and, in the case of the first two, smartphones. While people initially rejoiced, the ports of the first two games quickly came under major scrutiny - both for a variety of technical shortcomings that could likely be attributed to a rushed release date, and for the fact that the ports inexplicably required players to sign in to a Bethesda.net account to access the game at all, despite launching with no online features whatsoever - even the multiplayer was local-only. Outrage and widespread mockery quickly erupted across the internet at the idea of a game originally released in 1993 requiring any kind of account log-in. Bethesda quickly apologised, claimed it was an accident, and promised to make the log-in optional in a future patch, with updates for the aforementioned technical problems also coming down the road. It turned out that the purpose of the log-in was to enable an Old Save Bonus for the then-upcoming Doom Eternal that would give the player Retraux costumes for the Doom Slayer - fair enough, but the fact that nobody realised that making the log-in mandatory would just piss people off is baffling.

    Toys 
  • Despite being a cherished and well-loved franchise, Transformers has made numerous mistakes throughout the years:
    • Gold Plastic Syndrome. A number of Transformers toys made in the late 1980s and early 1990s were, in part or in whole, constructed with a kind of swirly plastic that crumbled quite quickly, especially on moving parts. Certain toys (like the reissue of Slingshot, various Pretenders, and the Japan-exclusive Black Zarak) are known to shatter before they're taken out of the box. There are pictures of the effects of GPS in the article linked above, and it isn't pretty. Thankfully, the issue hasn't cropped up since the Protoform Starscream toy was released in 2007, meaning Hasbro and TakaraTomy finally caught on.
      • GPS isn't limited to either gold-colored plastics or the Transformers line. Ultra Magnus' original Diaclone toy (Powered Convoy, specifically the chrome version) had what is termed "Blue Plastic Syndrome" (which was thankfully fixed for the Ultra Magnus release, which uses white instead of blue), and over in the GI Joe line the original Serpentor figure had GPS in his waist.
    • Some translucent plastic variants are known to break very easily when stress is placed on them.
      • The first Deluxe-class 2007 movie Brawl figure has a partial auto-transforming gimmick that relies on internal gears made of translucent plastic, which tend to shatter and stop the gimmick from working. On top of that, the pegs attaching his arms to his shoulders are a different shape from the holes they're supposed to peg into. Thankfully, the toy was released some time later in new colors, which fixed all of these issues.
      • Several car Transformers from the War for Cybertron toyline have their entire hoods cast in clear plastic to make their windows and windshields see-through, including the hinges used for transformation. The stress placed on said part from playing with the toy as intended frequently leads to hoods and windshields cracking.
    • Unposeable "brick" Transformers. The Generation 1 toys can get away with it (mainly because balljoints weren't widely used until Beast Wars a decade later - though they were used as early as 1985 on Astrotrain - and safety was more important that poseability), but in later series, like most Armada and some Energon toys (especially the Powerlinked modes), they are atrocious - Energon Wing Saber is literally a Flying Brick, and not in the good way. With today's toy technology, there just isn't an excuse for something with all the poseability of a cinderblock and whose transformation consists of little more than lying the figure down, especially the larger and more expensive ones. Toylines aimed at younger audiences (such as Rescue Bots and Robots in Disguise 2015) are a little more understandable, but for the lines aimed at general audiences or older fans (such as Generations), it's inexcusable.
    • Toys over-laden with intrusive gimmicks, "affectionately" nicknamed "Gimmickformers", are generally detested. While these are meant to cater to a younger crowd, when a figure has so many things going on that they detract from the transformation, articulation, and aesthetics, even that crowd may be repelled by it. Such a figure is the infamous Transformers: Armada Side Swipe - featuring a boring (though passable) car mode, a Mini-Con "catapult" that doesn't normally work, and a hideous robot mode with excess vehicle bits hanging off everywhere (including the aforementioned catapult on his butt), the poseability of a brick, and the exciting Mini-Con-activated action feature of raising its right arm, which you can do manually anyway. Toy reviewer TJ Omega once did a breakdown of the figure, coming to the conclusion that its head was the only part without any detracting design faults.
  • Cracked has an article called "The 5 Least Surprising Toy Recalls of All Time", listing variously dangerous toys. Amongst them:
    • Sky Dancers, the wicked offspring of a spinning top, a helicopter, and a Barbie doll. It came out looking like a beautiful fairy with propeller wings - and a launcher. When those little dolls went spinning... well, let's just say that there's a good reason why nowadays, most flying toys like this have rings encircling the protruding, rotating wings. Their foam wings became blades of doom that could seriously mess up a kid's face with cuts and slashes. There's no way to control those beauties once they are launched, and it's hard to predict where they will go - which is why they're "Dancers"!
      • There was also a boys' version called Dragon Flyz, as well as various imitators. They could be quite enjoyable - it's just that they were also surprisingly dangerous.
      • Surprisingly, the Sky Dancers toy design has been brought back by Mattel for a DC Super Hero Girls tie-in. Let's hope they learned from Galoob's mistakes.
    • Lawn Darts. Feathered Javelins! They came out in the early 1960s and were only recalled when the first injuries were reported... in 1988.
      • Charlie Murphy (Eddie's brother, best known for writing for Chappelle's Show) appeared on an episode of 1000 Ways to Die that had the story of a coked-up guy from the 1970s having a barbecue with his other drugged-out buddies (with the coked-up guy getting impaled in the head with a lawn dart after getting sidelined by a woman who just went topless) to comment on how the 1970s was a decade full of wall-to-wall health hazards, from people eating fatty foods to abusing drugs to playing with lawn darts (which most people did while under the influence).
      • "Impaled by a stray lawn dart" is also one of the "Terrible Misfortunes" that can befall your bunnies in Killer Bunnies and the Quest for the Magic Carrot.
      • If you want to be technical, lawn darts were really invented around about 500 BCE... as Roman weaponry.
    • Snacktime Cabbage Patch Dolls, a 1996 Cabbage Patch doll sold with the gimmick that its mouth moved as it appeared to "eat" the plastic carrots and cookies sold with it. The problem was, once it started chewing, it didn't stop until the plastic food was sucked in... and little fingers and hair set it off just as well as plastic food. The only way to turn it off was to remove the toy's backpack... something buried so deeply in the instructions, nobody saw it until it was announced publicly.
      • An episode of The X-Files took the idea and ran with it. There was also a Dexter's Laboratory episode where Dexter and Dee Dee find a "Mr. Chewy Bitems" in the city dump; Dex tries to recall why they discontinued the toy as Dee Dee runs around in the background screaming with the bear chewing on one of her ponytails.
      • The obscure comic book series Robotboy (not to be confused with the popular animated series of the same name) had an album in which the titular robot boy takes exaggerated versions of one of these toys home, after which they wreak havoc and try to destroy the house. The quote from the corporate executive who ordered those toys destroyed sums the thing up:
        The idea was to give toys to kids so that they never had to clear away their stuff. What the manufacturer did not tell us, however, was that the toys cleared it away by eating it.
  • Easy-Bake Ovens have been around since the 1950s and are, as the name suggests, easy to use... but a 2006 redesign left the opening just big enough for a tiny hand to fit in, yet not big enough to pull it back out - right next to a newly-designed heating element. Ouch.
  • Aqua Dots (Bindeez in its native Australia) is (or was) a fun little collection of interlocking beads designed for the creation of multidimensional shapes, as seen on TV. You had to get them wet before they would stick together, but the coating contained one ingredient it shouldn't have - a chemical that metabolizes into the date-rape drug GHB. Should someone put the beads in their mouth... This wasn't the fault of the company that made them, but rather the Chinese plant that manufactured the toys. They found that a certain chemical was much less expensive than the one they were supposed to be using, but still worked. They didn't do the research that would have told them said chemical metabolizes into GHB, or else they didn't care (and they also didn't tell the company that they made the swap). And yet, for all the Chinese toy manufacturer chaos that was going on in the media at the time, the blame fell squarely on the toy company for this. They still exist, though thankfully with a non-GHB formulation. They were renamed Pixos (Beados in Australia) and marketed as "safety tested". In fact, they were marketed the same way Aqua Dots were, with the same announcer and background music (compare and contrast). Now, they are marketed in America under the name Beados.
  • Chilly Bang! Bang! was a chilled juice-drink toy released in 1989 by Mackie International consisting of a gun-shaped packet of juice. To drink it, you had to stick the barrel in your mouth and pull the trigger. And if you thought Persona 3 was controversial...
  • The Dark Knight tie-in "hidden blade" katana toy had a hard plastic, spring-loaded blade in the handle that shot out with such force that it could cause blunt-force trauma if kids weren't expecting it, and it could be set off by an easily-hit trigger in the handle. They were marketing an oversized blunt switchblade.
  • The DigiDraw promised to make tracing, an already simple act, even easier by placing the thing to be traced between a light and a suspended glass pane, projecting its image onto a blank piece of paper. Its ridiculously poor design meant that even if you could assemble it, the resulting projection was faint at best, and it would screw with your focus to the point where you couldn't do a perfect trace, assuming you hadn't already ruined it by nudging the paper even slightly. And trust us, we're not alone in this belief.
  • LEGO fumbled their plastic formula around 2007, which resulted in nearly all of the lime-green pieces becoming ridiculously fragile. This greatly affected the BIONICLE sets of that era, which were already prone to breaking due to the faulty sculpting of the ball-socket joints. Since that line of sets had more lime-colored pieces than usual, needless to say, fans were not amused with the ordeal, as it meant they couldn't take apart and rebuild their LEGO sets. Reportedly, some of these lime pieces broke during the figures' very first assembly.
    • In 2008, LEGO reacted to the fragile sockets by introducing a rectangular design and phasing out most of the old, rounded sockets. The problem only got worse. Any socket joint from 2008-10 has a high risk of breakage, and the toys don't even need to be played with or taken apart for this to happen, as the plastic cracks apart on its own. Sadly, many of the smaller figures were designed with large, one-piece limbs, meaning buyers had to replace the entire limb if their socket broke. Fans are split on what Lego might have been thinking with the '08 socket-type. One group believes they intended it as a fix but messed up spectacularly. Others believe the rectangular design was to ensure that the pieces would break in a predetermined spot and still allow the parts to be used for building - however, the pieces in question developed cracks in numerous spots. Thankfully, Lego seems to have learned their lesson - in 2011, they redesigned the sockets to be much thicker and sturdier.
    • Bionicle rubber bands suffered from this too. There were two kinds: the older, less durable variants with a rectangular cross-section, and the far more durable, higher-quality rounded rubber bands. Early sets came with the older bands, and these tended to rot away within years of the figures' assembly, rendering the action functions of the sets entirely useless. The better-quality bands were introduced in 2002. For some strange reason, the 2005 Visorak set line got divided into two sub-lines: the regular one, which had colored canisters and the better kind of rubber band, and another variant, which came in black containers and had the older band. These broke or melted off with time, thereby taking away the sets' main gimmick, the Visorak's snapping pincers.
  • Marvin's Magic Drawing Board, which came along near the end of The '90s. It billed itself as a reusable scratchboard. Even if it were, in fact, reusable, simply putting a mark on it took the will of a thousand men. Much like the later DigiDraw, it was meant to do something simple, and it couldn't do that.
  • Rollerblade Barbie, which had a gimmick that made the skates spark. Slightly risky if used on lacquered vanities with hairspray in the air.
    • Bill Engvall discussed seeing this on the news once:
      "What if Barbie rollerskated through a pool of gasoline? What if Barbie had a hand grenade? Is that a common household problem now?"
    • For exactly this reason, there will never again be a Transformers toy with sparking action. Actually, maybe not.
    • Razor marketed, for a month or so in 2009, a scooter that came with its own spark generator.
  • Tie 'N Tangle, a 1967 Hasbro game based on wrapping other players in a web of nylon string, would otherwise be So Bad, It's Good based on its unintentional reference to bondage had it not been for its significant safety hazards: players can trip and hit their heads, be strangled by the cord, and so on. Even worse, the cord is too strong to be broken by hand should an emergency actually happen. Jeepers Media suggests destroying this game, as its vintage worth is far outweighed by the hazards it poses.
  • The infamous Harry Potter-themed Nimbus 2000 toy broomstick. Needless to say, there are certain issues with a toy that vibrates and is designed to be "ridden" between the legs. At least one of the intended customers found that their older sister developed a remarkable interest in it.

    Pinball 
  • The Twilight Zone pinball table is well-respected as a classic and brilliant design, except for one part: the skill shot. If the skill shot is made successfully, it launches the ball into the bumpers... which can then send the ball directly to the left outlane and out of play. Many veterans tell new players of this machine to deliberately fail the skill shot, as the small skill shot bonus isn't even vaguely comparable to the strong risk of losing the entire ball. Even the table's designer referred to this gaffe as not one of their shining moments.
  • When six-ball multiballs were becoming popular as signature features, Data East decided to go beyond that for the Apollo 13 pinball table and implemented a thirteen-ball multiball. Not only was it nearly impossible to hit anything with that many balls on the playfield, but the inevitable attempts to flip with six or seven pinballs weighing down a single flipper quickly burned out the flipper coils, rendering the entire machine unplayable until the coils were replaced.
  • Popeye Saves the Earth had two playfields, an upper one and a lower one. For some reason, the upper playfield was placed over part of the lower one (usually upper playfields are tucked into a corner of the cabinet). To alleviate this, the upper playfield was given a transparent bottom, allowing the lower playfield to be seen through it... but that transparent panel would inevitably get scratched beyond visibility after enough playtime, ensuring part of the lower playfield would be impossible to see.

    Aircraft 
  • The World War II-era Blackburn Botha torpedo/patrol bomber, of which the government test-pilot's assessment began with the words "Entry into this aircraft is difficult. It ought to be made impossible." The rest of his report consists of tearing multiple aspects of the design a new one in his efforts to show why. Nor was it just the Botha. Blackburn in general had had a long history of producing what one aviation writer called "damned awful to fly aircraft" which were also aesthetic horrors, and to the very end of the war it continued to miss the mark, turning out aircraft which were either lemons or which would have been astoundingly good if only they'd been ready three or four years earlier. It took until the 1950s for Blackburn to finally turn out an aircraft that was a winner in every way, but the Buccaneer had to wait until it was almost ready for retirement to show its mettle on the battlefield (in Iraq, 1991). Although given that its original design mission had been delivery of nuclear bombs onto Soviet naval strike groups and high-value shore targets, this is probably just as well. Sadly, it couldn't really enjoy its success even then - the company was bought out by Hawker Siddeley a few years after the Buccaneer was introduced.
  • Early fighter planes were designed with the gun right behind the propeller. The obvious problem - bullets hitting the spinning blades - was first worked around with deflector plates on the propeller blades. This was a dangerous system in more ways than one: ricochets from deflected bullets could damage the plane or hit the pilot, the plates could be destroyed by gunfire leading to destruction of the propeller, and it negated a significant portion of the gun's firepower. Other designs sidestepped the problem by putting the guns in the wings, which had its own drawbacks (primarily the need to harmonize the widely spaced guns - often firing different ammunition types - so their fire converged at a set point, which ruined accuracy past the convergence point). The problem was finally solved by linking the trigger assembly to the propeller's axle, which allowed timing the shots to pass between the propeller blades without damaging them, or, more rarely, by designing the gun to fire through a blast tube running the length of the axle. (A rough sketch of the timing involved appears at the end of this folder.)
  • A frequent problem with the USAF's new "Century Series" of fighters that went to Vietnam was that many of the early-model jets had their backup systems routed right next to the primary systems. Fine as insurance against mechanical failure, but these were combat aircraft: many losses happened when a single hit disabled both systems at once.
  • The McDonnell Douglas DC-10 was for the most part a good plane, but early DC-10s suffered from one serious design flaw: it was extremely difficult to close the cargo doors properly, and there was no way to tell whether a door was actually latched or merely looked like it was, resulting in doors occasionally blowing out during flight. This led to one incident over Windsor, Ontario, where the plane in question barely made it back to Detroit, and to a crash in the Ermenonville Forest in France that killed everyone on board. The latter happened because McDonnell Douglas didn't actually fix the problem after the former; instead they slightly redesigned the doors with a viewing window and warned ground crews to look through it to make sure the latch pins were seated, without considering that ground crews might not speak any of the languages on the warning sticker. After the Ermenonville crash, Douglas and the FAA finally got the message and the cargo door was properly redesigned, but the DC-10's safety reputation suffered for the rest of its operational life, and the later tragedy of American Airlines Flight 191, despite turning out to be the result of sloppy maintenance practices rather than another design flaw, sealed the aircraft's fate.
  • On August 6, 2005, an ATR-72 (by no means a terrible aircraft) operating as Tuninter Flight 1153 unexpectedly ran out of fuel, and the pilots, in full Oh, Crap! mode, fruitlessly tried to restart the engines rather than feathering the propellers, because the fuel gauge incorrectly showed that there was plenty of fuel left; the plane ended up ditching just short of an emergency landing in Sicily, with 16 fatalities. It turned out that maintenance crews had installed a fuel gauge designed for the far smaller ATR-42, because the correct part wasn't logged in the database properly and said database also indicated that the ATR-42 gauge could be used on the ATR-72. Why was such a staggering failure with such tragic consequences even possible? Because ATR got lazy and didn't change the fuel gauge's physical design for the 72, so two safety-critical modules calibrated for completely different aircraft fit interchangeably into the sockets of both planes. For obvious reasons, the ANSV recommended that a redesign of the fuel gauge be mandated to remove this compatibility.
  • The crash of Helios Flight 522, the result of the flight crew not realising that the pressurisation system was set to manual rather than auto, led to bereaved families suing Boeing two years later over the fact that the takeoff configuration warning and the cabin altitude warning used the same alarm tone. The lawyer argued that this flaw had contributed to the tragedy by causing the flight crew to mistake a deadly emergency for an annoying malfunction, and that similar, albeit less deadly, incidents had happened in Ireland and Norway.
  • Interactive Flight Technologies' Inflight Entertainment Network (or IFEN), installed in the Alitalia and—most infamously—Swissair fleets, was a pioneer of digital in-flight entertainment systems, but it was so rife with reliability concerns that it's astonishing it didn't end up a Trend Killer... especially given the fact it ended up killing people. IFEN was power-hungry, prone to overheating, heavy, unreliable, and couldn't easily be turned off by the flight crew in case of emergency. Qantas considered it but rejected it on the basis that the heat it produced kept corrupting the hard drives, and Alitalia were dissatisfied because they kept having to replace an average of three to five underseat units each flight. And then a Swissair plane caught fire and crashed off the coast of Canada, and wet-arcing from the IFEN system was implicated as the initial cause of the fire, causing the reliability concerns to turn into safety concerns; Swissair ended up disabling the system in their entire fleet less than two months after the crash. A year after the crash, the FAA banned IFEN on MD-11s due to the unsafe design and installation.
  • Class D cargo holds were, simply put, a terrible idea from the very beginning. The concept seemingly made sense: it was airtight and would cause any fire to exhaust all the available oxygen, so no need for pesky fire detection or suppression equipment, which would create extra wiring and be more expensive! What would happen if a fire was caused by an oxidizer obviously never occurred to anyone. That was, until 1996, when ValuJet's Miami maintenance contractor labeled several boxes of expired chemical oxygen generators "Oxy Cannister - EMPTY", resulting in ValuJet baggage handlers, unaware of what they actually were, loading them onto a flight to Atlanta and that flight catching fire and crashing into the Everglades. The pilots were only alerted by equipment failure and a Mass "Oh, Crap!" in the cabin when flames started to breach the floor, and by that point, the 110 people on that flight were already screwed. In 1998, the FAA, fresh off being chewed out by government officials, NTSB investigators, and victims' family members for not implementing recommendations from previous Class D cargo fires, gave airlines three years to convert all Class D cargo holds to Class C or E.
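  As promised in the fighter-gun entry above, here is a minimal back-of-the-envelope sketch (in Python) of the timing problem an interrupter gear solves. Every number in it - propeller RPM, blade count, blocked arc, bullet speed - is an illustrative assumption, not the spec of any real aircraft.

      # Rough sketch of why synchronizing a gun to a propeller is a timing
      # problem. All figures below are made up for illustration only.

      def safe_window_ms(prop_rpm: float, blade_count: int, blade_arc_deg: float) -> float:
          """Duration (ms) of the gap between successive blades sweeping past
          the gun's line of fire, i.e. the window a round may safely pass through."""
          deg_per_ms = prop_rpm * 360.0 / 60_000.0        # degrees swept per millisecond
          gap_deg = 360.0 / blade_count - blade_arc_deg   # open arc between blades
          return gap_deg / deg_per_ms

      # Assumed: a two-blade propeller at 1,200 rpm, each blade blocking ~20
      # degrees of arc where the muzzle points through the disc.
      window = safe_window_ms(prop_rpm=1200, blade_count=2, blade_arc_deg=20)
      print(f"Safe firing window per gap: {window:.1f} ms")   # roughly 22 ms with these numbers

  With these made-up figures each gap stays open for on the order of 20 milliseconds, while a bullet crosses the few centimetres of the propeller disc in well under one - so the window is generous, provided the trigger is only released inside it, which is exactly what the cam driven off the propeller shaft does.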

    Automobiles 
Examples here are less severe than The Alleged Car. Even otherwise-acclaimed models can suffer from poor design choices.

  • JATCO continuously-variable transmissions were common in many Nissans and gained infamy for their lower-than-expected lifespan: after 60,000-80,000 miles, owners reported the transmission failing and being left with hefty repair estimates. Nissan made the leap of faith to CVTs to keep up with the fuel-efficiency wars in the automotive industry. Among the reasons the transmissions failed prematurely were inadequate transmission coolers and Nissan's habit of pairing them with engines too powerful for them, such as the relatively potent VQ35 V6, which only made the overheating worse. This was one of the signs of the company's Audience-Alienating Era, as budget cuts and mismanagement in the late 2010s introduced multiple idiot designs that crashed the company's reputation; the JATCO CVT was by far one of the most infamous, marring otherwise decent vehicles.
  • Certain models from Honda can still be desirable, but may be marred by premature automatic transmission failures, so researching before purchase, finding one with a manual, or getting a warranty may be advised.
  • Gasoline direct injection had troubles in its initial public introduction. By moving the fuel injectors from behind the intake valves to high-pressure injectors aimed right into the cylinders, power and efficiency were improved. However, gasoline washing over the backs of the intake valves is also what keeps them clean, and GDI engines lose this built-in valve-cleaning effect. Oil vapor from the PCV system can build up on the valves as deposits that disrupt airflow or, worse, eventually break off and damage the catalytic converter.

    One of the remedies to this maintenance worry is for the maker to add a set of traditional port fuel injectors that handle cleaning duty. Otherwise, an aftermarket oil catch can will intercept the oil vapor and only needs emptying periodically. Considering the depth of experience in the engine-building industry, one may wonder how this quirk of the lubrication system was overlooked (besides the budget department possibly cutting corners in quality assurance).
  • Chrysler Motors
    • The LH engine is one of the most infamous in the company's history. Its oil passages are incorrectly sized, which raises the possibility of oil sludge accumulating and starving the engine of oil, even with correct oil change intervals. To make matters worse, the internal water pump was another point of failure: its seals could rupture due to poor build quality, letting coolant mix with your oil. Finally, when the reports of engine failures started rolling in, Chrysler reps were dismissive and would even insinuate that the owner had failed to change their oil in a timely fashion, leading many owners to end their loyalty to the brand.
    • If you have a Dodge Caravan with a center console, beware of laying credit-card-sized items on the rolling shutter that conceals the 12V outlets. They can easily slide inside, even on models as late as the late 2010s, where you'd think a design quirk like this would have been addressed long before. Drivers have reported losing credit cards this way after laying them on the shutter closest to the dashboard. Fortunately, it's not too difficult to partly disassemble the center console to retrieve lost items, but one must still be careful not to disturb the wiring, and it's a little too involved for what should be a simple, user-friendly retrieval.
  • Toyota cars from 2007-2013 may have had engines fitted with incorrectly sized piston rings, which led the affected engines to burn oil at an embarrassing rate. The good news is that Toyota covered refits of these engines under the powertrain warranty, but it boggles the mind how the engineers mismeasured such a vital aspect of the engine's compression.
  • Tesla is known for this, especially in its cars, but one of the biggest recurring problems is the trend of removing mechanical overrides. In particular, there is no mechanical way to open the rear doors of the Model Y with the power off (and in prior models, it wasn't that easy either), which, in an emergency, is the last thing you want to find out.

    Locomotives 
  • Early-model CIE 001 Class and WAGR X class locomotives used the same dubious engine design as the BR Class 28, but unlike the Class 28, both types went on to have long and successful careers once their troublesome engines were replaced.

    Weaponry 
  • The Schwerer Gustav, used by the Germans during World War II, was, and still is, the largest artillery cannon ever made, sporting a 32.5-meter-long barrel and firing the heaviest shells of any weapon ever developed. Unfortunately, firing such powerful projectiles wore out the barrel quickly: after around 300 shots, the barrel had worn down and was no longer usable. About 250 of these shots were fired during testing, meaning it was only fired 48 times in actual combat (granted, these few shots did prove to be as devastating as one would expect an 800-millimeter shell to be). Its designers had foreseen this and sent a spare barrel with the gun to the front, but by the time it was needed the army was beginning its retreat. The difficulty of deploying and operating the Schwerer Gustav meant that it never got a chance to be used again afterwards.
  • Dog bombs were supposedly used by the Soviet Union in World War II: dogs were trained to run under enemy tanks so the bombs strapped to them could detonate underneath. One version of the story goes that, since the Soviets didn't have German tanks to use for training, they simply trained the dogs on their own tanks - and between that and the fact that the gasoline-powered German tanks sounded and smelled different from the diesel-burning Soviet ones, the dogs in actual combat would seek out the Soviet tanks that looked and smelled familiar to them. And that's when they weren't proving to be smarter than their trainers accounted for and, upon noticing all the gunfire and explosions, deciding that anything more than dropping their payload right there and running back into the trench wasn't worth it.
  • The British No. 74 Grenade, also known as the Sticky Bomb, was meant to be an anti-tank weapon made to compensate for Britain's lack of AT guns after many of them were left behind during the Dunkirk evacuation. It was hard to throw due to its weight, so the best ways to use it would be either to drop it onto a tank from a point of elevation, or to run right up to the tank and slap the bomb onto it. That's right, the recommended use involves running up to an enemy tank in the middle of combat, which is as dangerous as it sounds, especially since the user would then have to run out of the blast radius in the next five seconds and hope that the explosion didn't launch the grenade's handle back at them like a bullet. Worse, the adhesive used on the grenade had a hard time sticking to tanks that were covered in dust or mud - which, considering the conditions of a battlefield, was nearly all of them. What it did stick to easily, however, was the uniform of the soldier trying to use the damn thing, leading to several incidents of panicked soldiers desperately trying to strip themselves of clothing while hoping that the grenade didn't arm itself. Talk about a Sticky Situation!
  • The AIM-4 Falcon guided missile (used in Vietnam) was infamous for how hard it was to lock on: it required seven seconds to establish a lock, which was vaguely acceptable in its designed role of tracking a slow, ungainly enemy bomber, but no easy task when trying to target another fighter whipping through the sky with you at or above Mach 1. The missile was also rendered useless if the user failed to achieve a lock the first time, as it had a very limited supply of coolant for its targeting sensor, its nitrogen bottle being smaller than the ones used for the heat-seeking AIM-9 Sidewinder. It also lacked a proximity fuse, meaning it had to score a direct hit to detonate, and had a very wide field of view - so wide that it was quite possible the missile would not be able to maneuver to actually hit a locked target. In a 20-year service history, it had only five confirmed kills. The Sidewinder, which replaced it, was a significantly more effective (and easier to use) missile, with almost all of its early problems solved simply through improvements to the heat-seeking sensor.
  • The Type 94 Nambu Pistol deserves a special mention: the sear that trips the firing mechanism is exposed on the side of the frame, and it doesn't take much pressure to trip it. This makes unintentional discharge of the gun way too easy.
  • Heckler & Koch's G36 rifle effectively encapsulated a shift toward using polymer furniture wherever it can be used in firearms, which in the G36's case means basically everything outwardly visible on the gun other than the charging handle (an exposed portion of the bolt carrier along the top of the weapon). This would normally be fine, as polymer frames and receivers had already proven workable, even when riding against steel parts as in guns like the Glock. The problem is that the specific polymers used for the G36's foregrip and receiver are the kind that soften when they get hot from, say, being right up next to a thin gun barrel with a cyclic rate of 750 rounds per minute. If the points of contact between the barrel and the inside of the receiver and foregrip were properly insulated, this likely wouldn't be a problem, or at least not a big one. But that's the thing - the points of contact weren't insulated at all. Testing by the German army determined that firing a single 30-round magazine through the weapon in under a minute heats it to the point that subsequent shots can deviate as much as six meters from the point of aim at the rifle's maximum effective range. (For a sense of how large an error that is, see the rough calculation at the end of this folder.)
    • YouTube channel InRange TV set out to test this alleged design flaw, using both (newer) American-made and (older) German-made parts. While there was a small amount of shift in the groupings fired with the German-made parts, the ultimate conclusion was that dumping a magazine's worth of ammo won't make the gun horribly inaccurate. They concluded that the barrel itself caused the slight shift, not the trunnion (the part that holds the barrel to the receiver) moving or the receiver warping - and even then, the shift wasn't enough to make the rifle ineffective at the ranges it needs to be effective at. A barrel made with modern manufacturing technology and metallurgy would render the grouping problem moot entirely.
  • For use by American soldiers during World War I, the Chauchat machine gun was rechambered for the .30-06 cartridge. Thanks to a hastened production cycle, the result quickly became known as one of the worst firearms ever: it took the Chauchat, with its numerous existing flaws (questionable ergonomics, plenty of open space for dirt and debris to get in, and flimsy magazines), and added a poor conversion to .30-06, with chambers often cut short, which caused cases to stick and, in many instances, the extractor to tear the rims right off. Many of the guns wouldn't pass factory inspection, and those that did reach front-line service were discarded almost immediately for technically inferior but infinitely more usable weapons; even automatic-rifleman squads would frequently excise the "automatic" part and go back to bolt-action Springfield rifles.
    • After World War II came the T24, an experimental version of the MG 42 machine gun converted to .30-06. Had they been custom-built for the round, there probably wouldn't have been an issue, as the base design did prove relatively adaptable (as seen with the post-war variants downsized to 7.62 and 5.56mm NATO), but that's the problem - they weren't custom-built for the round; they were made out of existing machine guns with the bare minimum of newly-manufactured parts needed to convert them to the longer .30-06. Two prototypes were built and neither was acceptable, with one suffering 51 stoppages within 1,500 rounds.
  • The USFA Zip-22 was so bad it killed the company. The weapon was designed to be "the safest gun ever", and it technically is, in that you can't fire the thing without it jamming half the time. This video goes into detail as to exactly why it's so bad, but the highlights: the weapon is unwieldy, very picky about its ammo, and all but guaranteed to jam when using extended Ruger 10/22 magazines, whose follower springs feed too slowly for the Zip. It has no extractor or ejector to properly remove cartridges or spent casings from the chamber, relying seemingly on firing pressure and random physics to hopefully fling casings out correctly. And it's cocked by placing your fingers disturbingly close to the barrel on two awkward little sticks, with extremely stiff springs underneath that make it all but impossible to charge the weapon without placing your hand in front of the muzzle. When Forgotten Weapons tried to fire it, the thing performed quite well for one magazine of hypervelocity ammo ("quite well" still including one jam) before later mags started jamming on every other shot, with one failure independent of the ammo so catastrophic - part of the gun locked up on itself in a way that required disassembly to free it, and they never did find out what caused it - that they had to call it a day. An addendum after Ian praised its "remarkably good performance" notes that when they came back the next day for slow-motion shots of the weapon cycling, they never got it to fire more than one round before it jammed, lovingly demonstrated with all the inserts of slow-motion footage they did get showing it failing to extract, double-feeding, or locking up.
  • The Standard Manufacturing S333 Thunderstruck, a double-barreled .22 Magnum revolver. While Karl Kasarda of InRange was initially intrigued by the concept, during testing he found that the gun had such horrible accuracy that it could barely hit a target reliably at 10 yards. After much testing and anecdotes from other owners, it was determined that Standard Manufacturing had been boring the rifling of both barrels simultaneously and one of them was badly misaligned, leaving one barrel fairly accurate and the other sending bullets spiraling sideways.
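  To put the six-meter figure from the G36 entry in perspective, here's the quick angular arithmetic, assuming a maximum effective range on the order of 500 m (an illustrative figure chosen for the example, not an official specification):

      import math

      deviation_m = 6.0   # lateral shift reported after heating
      range_m = 500.0     # assumed engagement distance, for illustration only

      angle_mrad = deviation_m / range_m * 1000.0               # small-angle approximation
      angle_deg = math.degrees(math.atan(deviation_m / range_m))
      print(f"{angle_mrad:.0f} mrad (~{angle_deg:.2f} degrees)")  # ~12 mrad, ~0.7 degrees

  Twelve milliradians is roughly an order of magnitude worse than the dispersion usually tolerated from a service rifle, which is what made the German army's findings so alarming (and the later rebuttals so contentious).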

    Streaming Services 
  • Do you want to know why YouTube compilations are often so long? For whatever reason, videos under eight minutes are limited to a single midroll ad, while anything over eight minutes can carry as many ads as the uploader wants. It doesn't help that YouTube's algorithm favors long videos, especially ones that get good watch time from people who watch all the way through. The limitation is presumably there to reduce annoyance at too many ads in a short time, but if YouTube were truly concerned about that, it would do something like scale the number of allowed ads with the length of the video instead of jumping from one to unlimited at the eight-minute mark. (A sketch of what such a scaled policy might look like appears at the end of this folder.)

  • Wanna watch dubbed anime on Crunchyroll? Well, good luck with that. Aside from dubs being very scarce on the site, it doesn't do a good job of telling you what has a dub and what doesn't. For example, Dragon Ball Z has only the dub with no indication that this is the case, Slam Dunk's dub is its own separate listing for some reason, and most other shows have the dub hidden away as a separate season, which isn't readily apparent in the least. For comparison, other streaming services such as Netflix handle this the same way MKVs do: the viewer is given the option to swap between audio tracks on the fly through a menu, as well as the ability to turn off subtitles save for instances where they're necessary, such as song lyrics or non-English text within the anime itself.

  • Disney+ does not label upcoming titles as "coming soon". Despite this, the search function still indexes them, so they'll still show up in search results. So a user could potentially see a title and plan to watch it later, only to find out that the title is not available yet.
    • Early in the service's run, pre-2009 episodes of The Simpsons were zoomed in instead of being pillarboxed like most other 4:3 shows on the service. Not only did many viewers find this unappealing, but a few visual gags were unintentionally lost because the zoom cropped them out. Disney later released a statement saying the issue would be rectified in the future, and on May 28, 2020, a "Remastered Aspect Ratio" toggle was added, giving users the option to watch the episodes pillarboxed.
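  For the YouTube entry above, here's a toy sketch of what the suggested "scale ads with length" policy could look like next to the all-or-nothing threshold described there. The function names and numbers are made up for illustration; this is not YouTube's actual logic or API.

      def midrolls_allowed_threshold(length_min: float) -> float:
          """The behaviour described above: one ad below eight minutes, no practical cap above."""
          return 1 if length_min < 8 else float("inf")

      def midrolls_allowed_scaled(length_min: float, minutes_per_ad: float = 8.0) -> int:
          """A smoother alternative: roughly one midroll per stretch of runtime."""
          return max(1, int(length_min // minutes_per_ad))

      for mins in (5, 8, 16, 45):
          print(f"{mins:>2} min: threshold={midrolls_allowed_threshold(mins)}, "
                f"scaled={midrolls_allowed_scaled(mins)}")

  Under the scaled version a 45-minute compilation earns a handful of ad slots rather than an unlimited number, which removes the incentive to pad everything past the eight-minute mark.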

    UI 

You know you've failed when your user interface doesn't even do its job correctly.

  • So the designers of the Soviet Phobos 1 space probe left testing routines in the flight computer's ROM - fair enough, everyone does the same, because removing them would mean retesting and recertifying the whole computer, which is generally impossible without said routines. But to design the probe's operating system in such a way that a one-character typo in an uplinked command would trigger a test routine that turned off the attitude thrusters - leaving the spacecraft unable to point its solar panels at the Sun and recharge its batteries, effectively killing it - takes a special kind of failure. (A toy illustration of this failure mode appears at the end of this folder.)

  • Linpus Linux Lite, as shipped with the Acer Aspire One. Now, in fairness to Linpus, its GUI could not possibly be more intuitive, it booted in just 20 seconds, and it recognized xD and other non-SD picture cards out of the box (if the computer had a reader that supported them). But there is a difference between designing a distro for complete beginners and designing a distro with several directories hard-coded to be read-only and Add/Remove Programs accessible only through some fairly complex command-line tinkering. It doesn't help that the sum total of its official documentation is a ten-page manual containing no information an experienced user couldn't figure out within five minutes of booting, that updating it was hell, and that boot times stretched to several minutes if the computer wasn't turned off properly.

  • Xandros Linux, as used by Asus in the first few models of their EeePC line - one of the distros Linpus was developed to compete with - was a little better, but not all that much. Aside from the fact that big parts of it weren't actually open-source - notoriously a big no-no in the Linux world - you still needed command-line magic to switch it from its idiot-proof UI into something a non-newbie would want to use. The "advanced" UI required delving into cryptic text-based files for configuration, and community-made utilities (that may or may not have been competently programmed) were necessary to access some features. Compatibility with existing software was okay-ish at start but became spotty later on. Eventually Asus saw the writing on the wall and started phasing out Xandros entirely; they switched to shipping EeePCs with Windows exclusively and started neglecting the existing Xandros user base. Anyone trying to use Xandros past that point would boot into the computing equivalent of a ghost town, with no updates, aging repositories and security issues piling up. Eventually Xandros was retired, and common advice for people getting second-hand EeePCs was "wipe it immediately and install anything else".

  • The music notation software Sibelius grew from a useful utility into a monster of crashy bloatware with one of the least intuitive interfaces ever released to the general public. Witness the horror.

  • Facebook publicly displays the response time and reply rate for instant messages sent to business pages. A good idea in theory, if not for the fact that it penalizes you for not replying to each and every message your page receives - which includes flame mail, spam, people capping off exchanges with a quick "okay, thanks!", or even just emoticons.
    • Some questions to Facebook support ask why it's possible to drag and drop entries in the TV Shows and Movies categories, but not in other categories like Apps and Games or Music; there is no official answer. In previous versions of the Likes pages, drag and drop was possible in all categories, but it has since remained disabled everywhere else.

  • Many, many pieces of PC gaming hardware feature extremely fancy graphical interfaces and special effects for their drivers. Although they might make the hardware look attractive for the few minutes people will spend setting them up, this also has the effect of consuming extra system resources and thus interfering with the actual games the user might want to play.
    • The Razer StarCraft II series of hardware was perhaps the worst example of this. The Razer Spectre mouse, for instance, had such an intrusive driver that a patch had to be issued for Starcraft II itself to prevent the slowdown it caused. The Razer Marauder keyboard, meanwhile, not only used the same intrusive driver, but when Razer sponsored their own team for Starcraft II, the keyboards they issued them with were... Razer Blackwidows, which didn't use the driver.
    • This is an issue with many "gaming" branded computer related products, be it hardware or software. The biggest irony is that most gamers want the best performance out of their computers and setups, but for some reason, companies that sell gaming products tend to design these things in such awkward ways that it inadvertently takes away from that. Sure, that cool RGB lighting setup that changes colors depending on the game you run or even depending on the situation in that game may be nice, but it probably requires software running all the time to get it working. Or sure, that heat sink may look cool and edgy, but it's not an optimal design so it can't cool as effectively as a boring straight finned one.

  • A previous version of Telenor's email service had a "report spam" button that deleted the selected message(s) in addition to reporting them as spam. The problem was that there was no confirmation before it irreversibly deleted the messages, and the button sat right below the "move email" button. You'd better not misclick while moving something important!
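  The Phobos entry at the top of this folder describes a command system where a dormant test routine could be reached by accident. Below is a deliberately simplified toy illustration of that failure mode; the command names and behaviour are invented for the example and have nothing to do with the real probe's software.

      def orient_solar_panels():
          print("nominal: orienting solar panels toward the Sun")

      def test_disable_attitude_control():
          # Ground-test routine left in ROM: harmless on the bench, fatal in flight.
          print("TEST MODE: attitude thrusters OFF")

      COMMANDS = {
          "ORIENT_PANELS_A": orient_solar_panels,
          "ORIENT_PANELS_B": test_disable_attitude_control,  # one character away from the command above
      }

      def execute(command: str, in_flight: bool = True):
          handler = COMMANDS.get(command)
          if handler is None:
              print(f"rejected unknown command: {command}")
              return
          # The missing safeguard: a flight-mode check that refuses to run
          # test-only routines once the spacecraft is under way. Without it,
          # any typo that happens to spell a valid entry is executed as-is.
          handler()

      execute("ORIENT_PANELS_A")   # what the operator meant to send
      execute("ORIENT_PANELS_B")   # the one-character slip

  A single extra check - "is this a test-only routine, and are we in flight?" - would have rejected the mistyped command instead of executing it.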

    Miscellaneous 
  • The stainless steel teapots and coffee pots commonly found in British cafes and, notoriously, on British Rail trains: while they look like design classics in brushed steel, even the handles are made of bare steel, which is very good at conducting the heat of two pints of liquid heated to boiling point. If you're lucky, the designer will have placed a thin layer of insulating foam between the pot and the handle, meaning the handle warms up, but not uncomfortably so; otherwise, you'll end up with a pot of tea you cannot even lift until it cools to lukewarm. British comedian Ben Elton cited this among other examples and speculated that the British government has a department founded specifically to butcher the designs of simple everyday tools.
  • Some drug awareness organizations, such as D.A.R.E., would give out pencils to students after their presentation, which had the message "Too cool to do drugs" printed on them. These pencils had a hilarious flaw: as they were sharpened, bits of the message would be shaved away, eventually leading to it reading "cool to do drugs", then simply "do drugs". After a child noticed this, new pencils had the message printed the other way so "drugs" would be the first word to get shaved away.
  • The Fabuloso brand of cleaning products has attracted some internet infamy for its packaging, which at a glance looks almost identical to a bottle of fruit-flavored drink. Imagine pouring yourself a glass of floor cleaner instead of punch...
  • The infamous Hawaii missile false alarm was caused by this; apparently the "real missile alert" button was right next to the "test missile alert" button, with no confirmation before sending it. There was also no easy way to send a follow-up message clarifying the false alarm, resulting in 38 panic-ridden minutes until they could arrange to send the message manually.
  • The Juicero, a $700 (later $400) cold-press juicing machine, had a buttload of crappy design decisions that led to it folding mere months after its launch:
    • The machine itself is not actually a juicer, but a large press. It only worked with pre-approved, overpriced packets that had to be ordered from Juicero's website and had a limited shelf life. Not only were you paying more for the machine, but you had to sign up for a subscription plan that, at its absolute cheapest, came out to $1,600 per year. In the event that you couldn't (or didn't bother to) buy the packets, this expensive machine was functionally useless.
    • It had a needlessly-complex setup procedure. To start with, there's online DRM on a juicer. Those who bought it were required to set up an account and connect to a cloud-based service in order to activate it in the first place. Don't have easy access to an internet connection? Too bad. The excuse for adding the DRM was that it would prevent you from using spoiled juice packs... but the machine would brick if you tried to use an expired package, and the company never bothered to fully answer questions about how the codes would function in the event of a sudden food recall.
    • The reason it was so expensive becomes clear on examining the hardware: the machine is filled with custom machined parts, expensive steel gears, a completely custom power supply (which had to be certified, creating additional cost), expensive molded plastic for the sleek outer shell, and needless complication - it took over 23 parts just to hold the door closed. A lot of this comes down to the odd choice of extracting the juice by spreading the force over the entire bag, like closing a book. As anyone with high-school physics knows, pressure is force divided by area, so pressing on the whole bag at once demands far more total force - and thus a much more powerful mechanism - than concentrating the squeeze in one spot, which is exactly what a pair of hands does. Yes, that's right - the juice bags can be squeezed by hand. The company wanted you to drop hundreds of dollars on an overpriced device and endure its needlessly complicated setup just to have the overengineered thing perform a task you can easily do manually. (Some rough numbers appear at the end of this folder.)
  • A water bottle shaped like a soccer ball, created to promote the 2018 World Cup, made the news in Russia due to its shape focusing beams of light similarly to a magnifying glass, making it a fire hazard if exposed to the sun.
  • The Tapplock smart lock has a lot of problems, including brittle materials and a ludicrously easy-to-hack security system, but what made it infamous was simple: it's held together with external screws. Yes, this means you can disassemble it with an ordinary screwdriver. The company's initial response was "The lock is invincible to people who do not have a screwdriver."
  • Emojis have one major flaw with them: whoever made your device has complete control over what they look like. Fortunately, they’re consistent for the most part, but there’s been some very odd cases of Lost in Translation for those on the receiving end...
    • Apple devices initially flat-out hid the Vulcan Salute (🖖) and Flipping the Bird (🖕) from you. They displayed just fine, they just weren't selectable. The middle finger was at least understandable as a form of Bowdlerization, but the Vulcan Salute couldn't be problematic by any reasonable stretch of the imagination, save for perhaps copyright infringement.
    • Samsung devices used to display cookies (🍪) as a pair of saltine crackers. This caused some confusion when Cookie Monster's Twitter account celebrated World Chocolate Day with some chocolate chip cookies, represented by the emoji. It wouldn't be until February 2018 when they finally fixed it (and many others, for that matter).
    • By 2018, Samsung and Apple devices had decided to Bowdlerise the gun (🔫) for whatever reason. What looked like a realistic pistol on some devices (it's even named that, and still has that design on the Unicode code charts) now looks like a laser gun or a water gun, respectively. The same happened on Windows 10 desktops. Most applications will show a font-colored laser gun, while web browsers show a green water gun with orange "magazine". As this article shows, this could go horribly wrong if, say, you were planning a water gun fight at the park, and used the emoji, making it look like you were planning a mass shooting to users of other devices. Eventually, by the end of 2018, other platforms such as Android, Windows, Twitter, and Facebook also changed their firearms into toy pistols, while the EmojiOne set put out both, with the former design being default and the latter optional.
    • Graphic designers seem to really like changing the context of the drooling face (🤤). Because it can only convey so much, apparently.
    • There's a character supposedly called "face with a look of triumph" which, on Windows and Android at least (plus this character database), actually represents an angry face steaming from the nose (😤).
  • Some Nokia dumbphones have the earpiece speaker located not above the screen, where it should be, but on the back of the phone. You can work around this by simply flipping the phone over, but then everyone around you will hear your interlocutor, especially if they like to talk loudly.
  • Deathdapters, as Big Clive calls them, allow the user to plug things in in various dangerous fashions, making them inherently unsafe, but one type is particularly dangerous thanks to a badly thought-out all-in-one design: American-style pins at the top and British-style pins at the bottom, both spring-loaded and both live whenever either set is plugged in. The result is the potential for electrified pins suddenly popping out and short-circuiting against the wall, or for the user to fatally shock themselves while trying to push them back in.
  • The line out of Paddington through Ladbroke Grove was once home to a badly designed sea of nonstandard reverse-L signal gantries. One of these, SN109, was particularly troublesome: it was prone to phantom aspects - danger signals that looked like caution or even clear signals depending on the angle of the sun - it was further obscured by overhead electrification equipment, and it was the last signal a driver would see. Prior to the Ladbroke Grove rail crash, which was caused by precisely this, SN109 had been passed at danger eight times in six years. Multiple drivers tried to tell Railtrack that SN109 was dangerous, but this proved fruitless: Railtrack was a Vast Bureaucracy of several private companies with little knowledge of what any of the others were doing, and the employee with the authority to track remedial actions didn't have the authority to check that the work had actually been done. So the signal went unfixed until an inexperienced, poorly trained driver - unaware that SN109 was a problem signal, because his trainer hadn't been able to teach him the route - passed it at danger and happened to be in the path of another train, resulting in the aforementioned crash. Because of this and several other crashes, the public and government came to see Railtrack as an Incompetence, Inc. and it was dissolved, with the regulatory side of rail travel renationalised; SN109 is now a single-lens signal.
  • AMC's High Impact Theatre System is a perfect demonstration of how not to build economy screens. A typical cinema screen has the speakers located behind it, with tiny holes in the screen to let the sound pass through. HITS instead used a curved, non-perforated screen, which reduced costs by requiring a smaller projector bulb than other screen types. However, because the screen had no perforations, the speakers had to be placed above and below it. This resulted in all sorts of sound issues, including phasing problems and, most notably, onscreen characters' voices not sounding right due to the weird speaker placement. There was also geometric distortion from the way the screens were erected, the contrast suffered badly, and hot-spotting was common. Put it all together, and there's a good reason industry insiders took to calling it the "SHIT System". Tellingly, AMC later put a lot of expense into eradicating the format, because after years of customer complaints even they understood what a bad idea it had been.
  • The Harmon hotel in Las Vegas stood for a mere six years before being demolished, and was never open to the public. It didn't last long because the builders severely screwed up the steel reinforcement of the concrete structure, forcing the height of the building to be cut from 49 to 28 floors. It was later discovered that even at the lower height the building was not stable and would have collapsed in the first earthquake, so the owners decided to demolish it, sued the builder, and sold the land.
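  Some rough numbers behind the Juicero entry's point about spreading the force over the whole bag. Every figure here is an illustrative assumption made up for the arithmetic, not a measurement of the actual product.

      # Force needed to reach the same pressure over a small patch versus the whole bag.
      target_pressure_kpa = 150.0    # assumed pressure needed to express the juice

      hand_area_m2  = 0.002          # ~20 cm^2: fingers squeezing one spot of the bag
      press_area_m2 = 0.030          # ~300 cm^2: a platen pressing the whole bag at once

      force_hand_n  = target_pressure_kpa * 1000 * hand_area_m2
      force_press_n = target_pressure_kpa * 1000 * press_area_m2

      print(f"hand squeeze:   ~{force_hand_n:,.0f} N")    # a few hundred newtons
      print(f"full-bag press: ~{force_press_n:,.0f} N")   # thousands of newtons - hence the steel gears

  Same pressure, fifteen times the force, purely because the area is fifteen times larger - which is why the machine needed its expensive drivetrain while a pair of hands does not.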
