> So, which pins could be combined with SDIO's three? After much thinking, the solution is obvious. RAM's nCS can be the SD card's CLK. RAM's CLK can be the SD card's CMD. RAM's MOSI can be the SD card's DAT. Try and figure out all the possible interactions with each device and what that would look like to the other, to convince yourself that it will work safely.
This is truly a brilliant hack, well worth publishing at Hacker News.
There was an old joke. University math class. Professor writes a huge formula on the board. Says, "and from this, it is obvious that," and writes another very different large formula on the board. He stands for a second, says "hm...", and walks out of the lecture hall. He returns 30 minutes later, throws a stack of papers freshly covered in writing onto the desk, mutters "yeah, that indeed was obvious", and continues the lecture.
My dad actually did a form of this, at the encouragement of a teacher.
He was working out the proof to a problem in college and got stuck half way through. He had the problem and the answer, so he worked it from both ends and got stuck in the middle. He had to present the process in class, so he went to the professor for help.
His professor (who had done it originally) couldn't remember how anymore, and told him when he got to that point, he should say "and from this, it's obvious that..." and just jump to the next step.
That's exactly what he did, and no one in class (half hour into the class) even noticed.
"It is obvious that..." usually means "with large amounts of algebra but very little actual thinking" when properly used in a math textbook or lecture. It has its utility.
I'm always a bit saddened to see that a separate chip is the go-to method to interface with USB. Unfortunately, USB is such an incredibly complex protocol that anything beyond a basic V-USB stack running USB 1.1 at low speed is generally not doable without specialized hardware and a significant software stack. Meanwhile a protocol like SPI is ridiculously simple... the minimum hardware needed is a shift register that can be clocked fast enough. I miss how desktops and laptops used to have an exposed serial and parallel port, which could communicate at this low level. I often wonder what it would be like if, instead of USB, we had stuck with UART, I2C, or SPI multidrop (using a small set of standard clock rates) for simple peripherals over a short distance (maybe over a single connector like the 4-pin JST SH cable used by Stemma QT, Qwiic and Grove), and then jumped to IEEE 802.3 Ethernet links for data-heavy peripherals like monitors and external drives. Then instead of having to support both USB and Ethernet, you would just support Ethernet links.
> Meanwhile a protocol like SPI is ridiculously simple
Yes, it is. It was intended to require as little silicon as possible to minimize the cost to the transistor budget. SPI doesn't contemplate power supply, hot-plug, discovery, bit errors, or any other of a host of affordances you get with USB.
I think there is some value for software developers to understanding SPI and the idioms used by hardware designers with SPI. Typically, SPI is used to fill registers of peripherals: the communication is not the sort of high level, asynchronous stuff you typically see with USB or Ethernet and all the layers of abstraction built upon them. Although there is no universal standard for SPI frames, they do follow idiomatic patterns, and this has proven sufficient for an uncountably vast number of applications.
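That shift-register picture can be made concrete with a tiny simulation (a sketch for illustration, not any particular chip's behavior): in SPI, each side is essentially an 8-bit shift register, and after eight clocks the two registers have swapped contents.

```python
def spi_exchange(master_byte, slave_byte):
    """Model one 8-clock SPI exchange: two 8-bit shift registers
    swap contents MSB-first, one bit per clock."""
    m, s = master_byte, slave_byte
    for _ in range(8):
        mosi = (m >> 7) & 1   # master's MSB appears on MOSI
        miso = (s >> 7) & 1   # device's MSB appears on MISO
        m = ((m << 1) & 0xFF) | miso
        s = ((s << 1) & 0xFF) | mosi
    return m, s   # each side now holds the other's original byte
```

The idiomatic register-fill pattern is then just two such exchanges back to back: clock out the register address, then clock out a dummy byte while the device's reply shifts in.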
I feel like you could support hot-plug, discovery, and bit errors with a protocol orders of magnitude simpler than USB, something you could bitbang on an ATTiny45. (And without Neywiny's list: 'bit ordering, half-duplex, the 4 modes, chip select to first clock setup times, bits per word, "strobing" nCS between words.' Those incompatibilities are mostly just people taking shortcuts.) And, unless you're talking about USB-C PD, which isn't involved here, the power-supply question isn't really related to the software complexity; it's just a question of making the power and ground traces on the USB-A plug (and the corresponding parts of other connectors) longer than the data traces so they make contact first.
You couldn't make it quite as simple as (a given flavor of) SPI, but something close to I²C should be feasible.
The main reason it's near impossible to bit-bang USB is that all devices are required to use one of a few fixed clock rates (1.5 MHz, 12 MHz and 480 MHz), unlike SPI and I²C which allow variable/dynamic clock rates.
If you simply remove this restriction, bit-banging USB would become trivial, even with all the other protocol complexity.
Though, I think USB made the right call here. The requirement to support any clock speed the device requested would add a lot of complexity to both hosts and hubs.
Only supporting a few fixed clock rates makes certification and inter-device compatibility so much easier, which is very important for an external protocol. Supporting bit-banging just isn't that important of a feature, especially when the fixed clock rates really aren't that hard to implement in dedicated silicon.
I never thought about making hot pluggable SMBus peripherals. It's an interesting idea. Many motherboards even have headers broken out, and of course operating systems already have drivers for many SMBus peripheral types. LoFi USB.
We have hot pluggable I2C at home. Every HDMI port (and pretty much every DP port with a passive adapter) has I2C on the DDC pins. The port also provides 5V 50mA so your MCU doesn't need external power. Example: https://www.reddit.com/r/raspberry_pi/comments/ws1ale/i2c_on...
Perhaps there is a place for a simpler alternative. My comment was pretty tangential to this discussion about the merits of SPI vs USB vs whatever. My point is that I believe some benefit can be had by software developers in understanding how components can be integrated together using a primitive as simple minded as SPI. I used the qualification "some" again, as well. I don't offer any revolutionary insights, but if you survey how SPI is used in practice, you'll learn some things of value, even if you never use SPI yourself.
I think there's something like a dual of the uncanny valley when it comes to protocol complexity vs adoption. Really simple like UART, I2C, or SPI, and engineers will adopt it on their own. But once you start wanting to add some higher level features, engineers would just as soon reinvent their own proprietary schemes for their own specific needs (the "valley"), and the network effects go away. So to create a more popular protocol you end up with a design committee where everyone piles in their own bespoke feature requests and the thing ends up being an ugly complex standard that nobody really likes. But at least it gives it a shot at wider adoption to prime the pump of network effects. (Or maybe it's more akin to how Java beats the Lisp Curse by using the social effect of having to raise an army to do anything?)
Isn't that basically what USB is? At least if you stick to USB 1. Obviously, since that time, it's expanded to cover a wider range of capabilities. It's a half-duplex serial line, just like I2C. Unlike I2C, it's asynchronous, like a UART.
Very true. And dealing with bit ordering, half-duplex, the 4 modes, chip select to first clock setup times, bits per word, "strobing" nCS between words, the list goes on. But when you see "USB 1.1 device" you know a large majority of what it can support and what it'll do.
While true it does seem like the flexibility allows for good optimizations. For example, a lot of devices don't need addresses or have memory maps etc. So, sadly while it's a pain to deal with, it does make things very fast and efficient.
Interesting. I recently tested a 6ft USB3 cable and an attached drive. The transfer of a 1TB file failed a few times (not sure of the details). This is strange, since the cable couldn't have been that bad (?) and the 16-bit CRC should have caught those errors (assuming an error will trigger a resend of data). Any ideas what the issue could have been? Does Linux provide a way to view the error rate?
I believe your confidence in the 16-bit CRC is excessive. There is a 1 in 65536 chance of a 16 bit CRC failing for certain types of corruption in 512 byte bulk USB packets, and there are about 2 billion packets in a 1TB transfer. If the BER is high, corruption of the transfer is not surprising.
A 6ft cable should be fine, assuming it is well designed, manufactured correctly, in good condition, and not in close proximity to high noise sources, such as SMPS. If any of those factors are compromised the BER will increase, and you will then be testing the rather limited capabilities of 16 bit CRC.
USB4 has 32 bit CRC for data payloads for a reason. In the mean time, the #1 thing you can do is use short, high quality cables.
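A back-of-the-envelope check of those figures (the packet count and the 2⁻¹⁶ escape probability follow from the comment above; the corrupted-packet rate is an assumed illustrative number, not a measurement):

```python
# How many corrupted packets might slip past CRC-16 in a 1 TB bulk
# transfer, assuming 512-byte packets.
packet_size = 512
total_bytes = 10**12                      # "1 TB" as 10^12 bytes
packets = total_bytes // packet_size      # ~2 billion packets
p_escape = 2**-16                         # chance CRC-16 misses a corrupted packet

p_corrupt = 1e-3   # assumed fraction of packets corrupted on a bad cable
expected_undetected = packets * p_corrupt * p_escape   # ~30 silent errors
```

Even a modest error rate thus makes a handful of undetected corruptions plausible over a terabyte, which is consistent with the failed transfers described above.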
https://wiki.wireshark.org/CaptureSetup/USB says "Software USB capture captures URBs (USB Request Blocks) rather than raw USB packets." So Wireshark couldn't give you the CRC, which would be stripped away.
You could hypothetically use a really, really high bandwidth oscilloscope (like 2 GHz, to view 480 MHz USB HS signals), but those are expensive. So you would have to resort to an external USB sniffer... out of curiosity I found someone made a sniffer that is basically a USB-capable microcontroller plus an FPGA and a USB PHY: https://hackaday.com/2023/06/13/cheap-usb-sniffer-has-wiresh...
I work in the AV industry. RS-232 is still the king for control signals between devices, even on brand new hardware that costs >10K USD. TV screens for signage/conference rooms often have RS-232 for more versatile control than HDMI-CEC.
Higher bitrates than 9600 bps are often not needed. The most common connector consists of three-pin screw terminals (Tx, Rx, GND), although these days most installations have at least one RS-232-to-USB adaptor somewhere. And for larger rooms, RS-232 is bridged over Ethernet.
This was a bit of a surprise when I started, but then I realised that many installations are decades old, with components having been replaced individually.
The article goes through a long list of 8 pin chips but ignores the very popular $0.10 CH32V003, which has 2k RAM and 16k Flash running at 48 MHz and 1 CPI -- or the new CH570 (I have a dev board on the way) which is also $0.10 in SOIC8 but now runs at 100 MHz with 16k RAM and 256k flash and has USB and a 2.4 GHz packet radio.
oops .. my brain of course knows but my fingers didn't in this instance. Fixed.
I think it's compatible with the old nRF24 chips -- I'll test when mine arrives in a week or so. The CH572 version has BLE5 ... I think the same hardware but including a software stack.
Well yeah, nowadays high-end microcontrollers may have an integrated USB HS PHY (notably the STM32F7 series and the MIMXRT1060 used in the Teensy 4, and many others), but the basic cheap attiny-like or ice40-like parts don't, and most require going through an external PHY. I've been wanting to get into the CH32V305 because it comes in a hand-solder-friendly TSSOP-20 package and has an integrated USB HS PHY, but I hear its software support is lacking and I don't see it on Microchip/Digikey/etc. We may soon have easy access to 20-cent microcontrollers with USB HS, but the protocol still feels incredibly complex and way overkill for simply interfacing a peripheral to a computer.
I still install brand new computers with serial ports! Dell sells us OptiPlex towers and we occasionally order them with a serial card to connect to legacy scientific instruments.
I bought a Lenovo mini thinkstation with a serial port because I thought it would be cool. But I don't even know what cool stuff I can plug in there, except for a serial console.
So you want to replace a USB PHY with a serial to Ethernet converter and an Ethernet PHY.
The reality is that the simple protocols like SPI and I2C just are not good enough. They aren't fast, the single-ended signal scheme makes them very sensitive to noise, and there is no error correction. These protocols make sense and work extremely well for their intended purpose: connecting ICs on a PCB. If you expose an unterminated port to the outside world, all bets are off.
These protocols and variations thereof are still in heavy use in modern PCs. But they're internal busses, as the protocols intend.
I haven't looked closely at the USB spec, but I imagine the main problem with bit-banging is simply the speed required. You have to have dedicated hardware because no microcontroller is fast enough to toggle the pins while also running the software stack to decode the protocol and manage error correction.
You can run into this exact problem bit-banging I2C. With a 20 MHz CPU, the maximum clock speed you can get is about 250 kHz. Just a bit more than half the typical maximum rate of 400 kHz. You can absolutely forget about the 1 MHz version.
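The arithmetic behind that limit looks roughly like this (the 40-cycle figure is an assumption about GPIO and loop overhead on a simple core, not a measurement):

```python
cpu_hz = 20_000_000
# Assume roughly 40 instructions per SCL half-period for the GPIO
# read-modify-write, data handling, and loop branch.
cycles_per_half_period = 40
# Two half-periods per clock cycle:
scl_hz = cpu_hz / (2 * cycles_per_half_period)   # 250000.0, i.e. ~250 kHz
```

Halving the per-edge overhead only gets you to 500 kHz, which shows why dedicated peripheral hardware wins so quickly.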
PHYs exist for one very good reason: it is vastly cheaper to offload comms protocols to hardware. Without that, you have to over-spec your CPU by quite a lot to get enough resources to manually manage communication. This is why every modern microcontroller contains hardware for I2C, SPI, serial, etc.
In summary, the simple serial protocols like SPI and I2C and UART are just absolutely terrible choices for external peripherals. They can't operate at reasonable speeds, they can't tolerate long cables, they can't tolerate noise. The nature and design of these protocols (excepting RS232 which is not UART) means that they cannot be used this way. There's no change to the spec you could make to support this without reinventing USB.
USB is also tough to bitbang because it has pretty strict timing requirements, compared to something like I2C where the clock only advances when the pin is explicitly toggled.
You may have intended to say SPI. I²C does support "clock stretching" to delay until ready, but that's only in one particular case; otherwise the I²C clock advances all the time at whatever your baud rate is, not only when a pin is explicitly toggled.
That depends on if you are the controller or the target, no? My usual use case for i2c is for talking to some peripheral from a microcontroller, where I am acting as the clock source. Clock stretching applies to the target side, at least when you are talking about SCL.
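The controller-side rule is small either way: after releasing SCL, poll until the line actually reads high before timing the rest of the bit, since a slow target may hold it low. A sketch with a fake open-drain bus standing in for real GPIO (both class and function names here are hypothetical):

```python
class FakeBus:
    """Stand-in for open-drain GPIO: the target stretches the
    clock for the first few polls, then lets the pull-up win."""
    def __init__(self, stretch_polls):
        self.stretch = stretch_polls

    def release_scl(self):
        pass  # controller stops driving SCL low

    def read_scl(self):
        if self.stretch > 0:
            self.stretch -= 1
            return 0   # target still holding SCL low
        return 1       # pull-up has raised the line

def scl_high(bus, max_polls=1000):
    """Release SCL and wait for it to actually go high."""
    bus.release_scl()
    for polls in range(max_polls):
        if bus.read_scl():
            return polls   # clock finally rose; proceed with the bit
    raise TimeoutError("target stretched SCL too long")
```

A controller that skips this check still works with most targets, which is exactly why so many bit-banged implementations get away without it.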
If there has to be a chip to facilitate coms, I feel like you could go on a similar hunt for 8 pin microcontrollers that could serve that function and maybe also provide some extra functionality. It would be interesting if it could connect to a PC though a DDC connection.
Yes, but the RPi is not yet quite a standard desktop/laptop workstation. And it would be nice if the RPi exposed its SPI, I2C, and UARTs over a standard mini connector like JST-SH connectors to ease plugging in peripherals (like is done with Stemma QT, Qwiic and Grove).
I mean... someone did try this with I2C. A couple of dead computer companies shipped a bus, whose name I forget, based on this concept. Its descendant is the VGA/HDMI control channel spec (which was implemented as a de facto separate standard but is very similar).
ACCESS.bus by Philips (who developed I²C) and DEC, and the DEC variant SERIAL.bus (with different voltage levels) used by their keyboards and mice for a little while.
USB is something that is possible to understand, and apparently bit-bang at at least low speed 1.5Mbps. Probably full speed 12Mbps as well on a modern MCU. I don't understand it, but one can.
In that sense it's like SPI, or perhaps more like CAN or SD: when you don't understand it, you reach for someone else to have done it for you, but you can choose to understand it and once you understand it you can implement it.
If you're the slave you have tight timing requirements but you only have to respond with certain fixed bit patterns to certain other bit patterns. If you're the master, you can do more things concurrently because the slave won't notice a little jitter in how often you poll it, but you have the problem of dealing with a wider variety of slaves that can be connected.
Oh, USB 1.1 at the transport layer and lower is not that difficult.
But there is more complexity on higher layers. USB HID (mice and keyboards) is often the first you'd want but it is special in that it allows a device to describe its own packet format in a tokenised data description language.
The device only has to send an additional blob when asked, but the host has to parse the contents of that blob and use the result to parse the device's packets.
And of course, every time there is complexity in a protocol and there are multiple implementations of it, there is more opportunity for them to be incompatible in very subtle ways.
This phenomenon has caused, for example, some gaming keyboards with N-key rollover that work perfectly on MS-Windows without any special drivers to be rejected outright by Apple or Linux hosts. (I hope these issues have been fixed by now, but I'm not sure.)
I thought there was also a mandatory fixed-layout "boot" profile for mice and keyboards? There was some controversy because vendors interpreted it as only being allowed to support the boot profile, resulting in most USB keyboards having 6-key rollover maximum.
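That's right: the boot protocol fixes the keyboard report at 8 bytes, which is exactly where the 6-key limit comes from. A minimal parser for the boot keyboard layout (field layout per the HID class spec; the example keycodes 0x04 and 0x05 are 'a' and 'b' in the HID usage tables):

```python
import struct

def parse_boot_report(report: bytes):
    """Parse an 8-byte HID boot keyboard report:
    byte 0 = modifier bitmap, byte 1 = reserved,
    bytes 2-7 = up to six concurrent keycodes (hence 6-key rollover)."""
    modifiers, _reserved, *keys = struct.unpack("8B", report)
    return modifiers, [k for k in keys if k != 0]

# Left Shift (bit 1 of the modifier byte) held with 'a' and 'b':
mods, keys = parse_boot_report(bytes([0x02, 0x00, 0x04, 0x05, 0, 0, 0, 0]))
```

N-key rollover requires leaving this fixed layout behind via a report descriptor, which is where the cross-platform incompatibilities mentioned above creep in.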
Excellent article, thanks. At the risk of missing the forest for the trees, I wonder how much simpler things would have been if you had been slightly flexible on the 8-pin requirement. It seems as though having just a few more pins would have reduced the complexity of the project significantly while only marginally increasing the time it takes to solder.
Hand-solderable might be relative. Certainly, QFNs and BGAs are more difficult. But I don't think the average hobbyist can solder QFPs, especially with an exposed underside pad. Heck, as a hobbyist, I don't think I'd trust myself to solder SOICs.
It is a very cool project, but I think the author (I know you will read this) takes it to the extreme. Again, it is very cool technically, but it contradicts the declared goal: to make a new computer kit for beginners.
IMHO, it doesn't matter to a novice what they solder, SOIC8 or SOIC28. SOIC28 is as easy (or as hard, if you want) as SOIC8.
And a larger chip could make a much more useful computer: it would be possible to add some minimal sound (as such chips typically have a DAC), a keyboard, and maybe, later, true monitor output in VGA style (not DP or HDMI, of course).
It would be not much harder (if at all) to solder, but could be a good base for expansion if the owner gains interest in such things.
I find the instruction set distasteful. I don’t want to start a flame war. This is just my opinion, but it is a strong one.
It was designed late enough in history to have taken advantage of a lot of available information. None of it was taken advantage of. Which is why a lot of extensions are now being proposed to actually fix things that should’ve been done right in the first place. With all the additions, it is slowly approaching sanity, only 10 years later. And I don’t buy the excuses that the learning process needed to happen. All the information was available all along, and the mistakes were obvious to basically all of us all along
Some of the extensions are only Band-Aids for the real design issues. E.g. sh2add is a Band-Aid for not having proper addressing modes for accessing arrays. A common refrain in answer to this is the promise of magical instruction fusion in the core. This is often promised, but never delivered. Certainly not in the cheap kind of processors that are the only target for RISC-V. Not having instructions for bitfield extraction and insertion is also an amateur mistake. That's why there are extensions to fix that one too. But it should've been obvious from the beginning that it would be necessary. A conditional branch based on a bit in a register is another obvious thing that should've been considered from the very beginning, as it is commonly encountered. Any analysis of modern software would've shown this.
What annoys me is the information was available. We know what sorts of things modern software does. It was all ignored. Instead, we got a slightly updated mips-1. And now with all the extensions, it’s fragmentation galore. You can either target the final result (RV23, I think is the name), which is somewhat sensible, but no hardware implements, or you can target the least common denominator, which will run everywhere, shittily
There are other, more serious, design issues when it comes to attempting to use RISC-V for actual high-performance computing. I'll save those for another rant.
At approximately the same point in history, another instruction set was designed. It actually took advantage of all the knowledge available about what modern software looks like, and it shows: aarch64.
On the flip side, my understanding is that AArch64 does not support anything like RISC-V's 16-bit compressed instructions, which are trivial to implement in hardware and allow RISC-V code density to match and even beat x86-64 for real-world binaries. I think RISC-V's design criteria make plenty of sense on their own, they're just not the ones you might prefer.
That was the other side of my rant that I alluded to and didn't go into. The compressed instruction set is idiotic for big cores that need to go fast. It wastes 3/4 of the encoding space on nonsense that is of no use there. When you're designing a big out-of-order core, slapping on a slightly bigger L1 is not a problem. Variable-size instructions are.
Compare : Apple M4 perf vs $your_favourite_rv_”fast”_core perf per MHz
It makes sense for micro controllers, but not for big cores.
This whole approach of trying to be everything for everyone is one of the reasons that RISC-V ends up being mediocre for everyone and perfect for no one.
> Compare : Apple M4 perf vs $your_favourite_rv_”fast”_core perf per MHz
We might get a meaningful comparison like that if Qualcomm starts making RISC-V based SoC's as a hedge against ARM. Or if Tenstorrent comes up with a M4-like CPU design. I think the jury is very much still out as to whether the rather limited variable-sized insns (2 or 4 bytes) of RISC-V + the compressed insn extension is a genuine concern. It's certainly nothing like the chaos you see with x86-64 (which seems to be a real bottleneck for very wide decode), and a lot closer to something like the old ARM32+Thumb2.
It’s been ten years. The jury has returned. Rendered judgement. Left to a picnic. Come back. And since retired to a carefree life of llama farming.
Your argument is wishful thinking, not fact. "Well, maybe if someone does it" isn't a fact. Maybe someone will make an 8501 that outperforms my M4. But I won't believe it till it is done.
And indeed it is close to thumb2. Which was purposefully rejected for aarch64. By careful study. Given that between the two, aarch64 looks to be much better thought-through, I’ll be giving the credit for making the right decision here to that team too.
>You can either target the final result (RV23, I think is the name), which is somewhat sensible
RVA23 is the name. And I hear this is what e.g. Windows, Android and the next Ubuntu LTS target.
>or you can target the least common denominator, which will run everywhere, shittily
Not as shitty as you make it out to be. And it has a huge advantage over aarch64 in its simplicity. Easily an order of magnitude simpler, which allows it to be used in scenarios aarch64 could only dream of.
>At approximately the same point in history, another instruction set was designed
I get it, you really like aarch64.
Which one weighted its options better? You might be right, but as years pass we'll have the benefit of hindsight. We'll be able to look back and see whether either side had good choices or cursed ones.
It's going to be fun. Hasn't been this fun since the 90s.
You can use a chip clip, but presumably you rejected that idea because your objective is for this to be replicable by the kind of people who don't know what a chip clip is.
I knew I wanted one, but I didn't know that it was called a "chip clip" and indeed "IC clip" and "IC test clip" seem to be a better match.
The clip will give you a electrical connection to the SPI flash but I'm not sure you'll be able to talk to it without jumpers on the board. Is it possible without jumpers?
I'd be very surprised if there isn't a filesystem driver for SPI flash. Linux obviously can speak SPI and SPI flash is extremely common in a lot of applications.
It looks like a passive mechanical adapter. I'd argue that a SOIC-8 programming clip thing would be better. This looks like you need to write a decent amount of code for a computer to talk to it as easily as a FAT-16 SD card.
You're right; it looks like SD cards implement a protocol on top of SPI. They don't just map the content to SPI addresses. So a flash chip won't look like an SD card to the OS
Right. Unsure if it's possible that any SD controller has a secret SPI mode. This seems like a ridiculously niche product. Which is fine but just something to keep in the toolbelt.
Why am I imagining this being used for really serverless IoT infrastructure?
Imagine you wanted to deploy something on a lot of devices. With something like this, open source and really limited (but it just works), you could have PCB providers build it and keep it in their warehouse or wherever, just supply it power and Ethernet, and then SSH into it, or even put some Vercel-like UI on top of it (Coolify/Dokploy? though we would need to slim down Docker a lot for Dokploy).
And when the work is done, they would scrap the metal/PCB and reuse it again...
I am not sure if such metal/PCB recycling makes sense...
If anyone technical can respond to this, it would be great.
This can also be done with RISC-V as well. I'm not sure, but I was thinking of creating a very dead-simple company (my brain and its weird thoughts... also, don't copy me, or if you do, then hire me xD) which just takes devices like old phones and roots them using AI, or manually, or maybe doesn't even root them, IDK...
Basically then providing them internet access and energy (not a traditional warehouse), because you only pay the one-time fees, and afterwards all the fees you pay are the real costs borne by the company operating it, with no middleman profits.
Kind of like a Costco (oh, I had forgotten the name of Costco and had to search Target alternatives xD), where they actually are there to help you save money, but to use their services you've got to have a card.
I've dabbled in microcontrollers and enjoy how the limitations force me to find creative solutions, but this is truly next level. I'm not great with a soldering iron but I'm seriously considering assembling one of these.
Aside from the project itself being very cool, this page is a great source for information about small microcontrollers, even if it does omit the WLCSP ones. And it links to the MIPS emulator page for ARM at https://dmitry.gr/?r=05.Projects&proj=33.%20LinuxCard which seems very interesting.
> It does not help that modern operating systems require gigabytes of RAM, terabytes of storage, and always-on internet connectivity to properly spy on you
It is interesting that to go to the limits of the low end, the author needs to implement a CPU emulator. It is not obvious that you need to ADD layers at this level, not REMOVE them!
I agree! While it is a fairly common approach -- e.g. the Apollo guidance computer interpreted the equations instead of directly executing them, and Woz's SWEET16 did likewise -- it is not obvious at first.
This pleasantly reminds me of the little 6502 or 1802 Altoids-tin computers you can buy and assemble, but arguably more "useful" (though I get a lot of use out of a 6502 ;-).
Was there no native ARM Linux you could have used? As I recall, you have used this emulated MIPS technique in many of your published projects, so it's good to prove that the hardware is working?
Or why not just go full native....grab some MIPS-core IP and make your own with an FPGA?
The main hang-up is that ARM uses PC as a general register a lot and in ways that make translating ARM instrs rather messy.
ADD R0, SP, PC, ROR SP
is an entirely valid, if nonsensical, instruction. But you must translate all valid inputs, else you risk breaking things. That may be a contrived example, but here is a common one: if one has a jumptable of relative offsets somewhere, pointed to by R10, even this is valid:
Interesting, I hadn't thought about this. Is the issue that the JIT output is likely to be a different number of bytes away? tbb/tbh seems like a more common version of that problem, TBH.
As I understand it, this kind of thing was a big problem for ARM in the mid-90s when they finally wrote the ARM ARM and outlawed things like ldmia r2!, {r0-r4}.
Different number of bytes out than in is not an issue. Efficiently translating such constructs in the general case is hard. Imagine what it would look like.
ENIG is gold plated, not solid gold, so it'll wear off and the metals under it will likely corrode. I think I eventually see wear marks on the majority of my USB connectors.
And worth a shirt "After much thinking, the solution is obvious."
Write down the problem. Think real hard. Write down the solution.
(The Feynman method—described by someone who observed him, though, not the man himself).
There was an old joke. University math class. Professor writes a huge formula on the board. Says, "and from this, it is obvious that," and writes another very different large formula on the board. He stands for a second, says "hm...", and walks out of the lecture hall. He returns 30 minutes later, throws a stack of papers freshly-covered in writing unto the desk, mutters "yeah, that indeed was obvious", and continues the lecture.
My dad actually did a form of this, at the encouragement of a teacher.
He was working out the proof to a problem in college and got stuck half way through. He had the problem and the answer, so he worked it from both ends and got stuck in the middle. He had to present the process in class, so he went to the professor for help.
His professor (who had done it originally) couldn't remember how anymore, and told him when he got to that point, he should say "and from this, it's obvious that..." and just jump to the next step.
That's exactly what he did, and no one in class (half hour into the class) even noticed.
"It is obvious that..." usually means "with large amounts of algebra but very little actual thinking" when properly used in a math textbook or lecture. It has its utility.
I'm always a bit saddened to see that a separate chip is the go-to method to interface with USB. Unfortunately USB is an incredibly-complex protocol that it seems anything beyond a basic V-USB running USB 1.1 at low-speed is generally not doable without specialized hardware and a significant software stack. Meanwhile a protocol like SPI is ridiculously simple...the minimum hardware needed is a shift register that can be clocked fast enough. I miss how desktop and labtops used to have an exposed serial and parallel port, which could communicate at this low level. I often wonder if instead of USB existing that we instead stuck with UART, I2C, or SPI multidrop (using a small set of standard clock rates) for simple peripherals (maybe over a single connector like the 4-pin JST SH cable for Stemma QT, Qwiic and Grove) over a short distance, and then jumped to IEEE 802.3 Ethernet links for data-heavy peripherals like monitors and external drives. Then instead of having to have separate support for USB and Ethernet, you just would support Ethernet links.
> Meanwhile a protocol like SPI is ridiculously simple
Yes, it is. It was intended to require as little silicon as possible to minimize the cost to the transistor budget. SPI doesn't contemplate power supply, hot-plug, discovery, bit errors, or any other of a host of affordances you get with USB.
I think there is some value for software developers to understanding SPI and the idioms used by hardware designers with SPI. Typically, SPI is used to fill registers of peripherals: the communication is not the sort of high level, asynchronous stuff you typically see with USB or Ethernet and all the layers of abstraction built upon them. Although there is no universal standard for SPI frames, they do follow idiomatic patterns, and this has proven sufficient for an uncountably vast number of applications.
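The "just a shift register" nature of SPI can be sketched in a few lines. This is a simulation of one full-duplex mode-0 byte exchange, not code driving real hardware; all names are illustrative:

```python
def spi_transfer(master_byte: int, slave_byte: int) -> tuple:
    """Simulate one full-duplex SPI byte exchange, MSB first:
    on each clock, both sides output their MSB and shift left."""
    master_reg, slave_reg = master_byte, slave_byte
    for _ in range(8):
        mosi = (master_reg >> 7) & 1  # master's outgoing bit
        miso = (slave_reg >> 7) & 1   # slave's outgoing bit
        # Each side shifts out its MSB and shifts in the other side's bit.
        master_reg = ((master_reg << 1) & 0xFF) | miso
        slave_reg = ((slave_reg << 1) & 0xFF) | mosi
    # After 8 clocks the two registers have simply swapped contents.
    return master_reg, slave_reg

# Master sends 0xA5 while the slave sends 0x3C; each ends up
# holding the other's byte.
assert spi_transfer(0xA5, 0x3C) == (0x3C, 0xA5)
```

This is also why the minimum silicon for an SPI target is essentially that one register plus a bit counter.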
I feel like you could support hot-plug, discovery, and bit errors with a protocol orders of magnitude simpler than USB, something you could bitbang on an ATTiny45. (And without Neywiny's list: 'bit ordering, half-duplex, the 4 modes, chip select to first clock setup times, bits per word, "strobing" nCS between words.' Those incompatibilities are mostly just people taking shortcuts.) And, unless you're talking about USB-C PD, which isn't involved here, the power-supply question isn't really related to the software complexity; it's just a question of making the power and ground traces on the USB-A plug (and the corresponding parts of other connectors) longer than the data traces so they make contact first.
You couldn't make it quite as simple as (a given flavor of) SPI, but something close to I²C should be feasible.
The main reason it's near impossible to bit-bang USB is that all devices are required to use one of a few fixed data rates (1.5, 12, and 480 Mbit/s), unlike SPI and I²C, which allow variable/dynamic clock rates.
If you simply remove this restriction, bit-banging USB would become trivial, even with all the other protocol complexity.
Though, I think USB made the right call here. The requirement to support any clock speed the device requested would add a lot of complexity to both hosts and hubs.
Only supporting a few fixed clock rates makes certification and inter-device compatibility so much easier, which is very important for an external protocol. Supporting bit-banging just isn't that important a feature, especially when the fixed clock rates really aren't that hard to implement in dedicated silicon.
> something close to I²C should be feasible.
That'd be https://en.wikipedia.org/wiki/System_Management_Bus
That's just I²C with some features disabled. It doesn't add any of the things I²C is lacking.
Well, okay, I guess SMBus ARP kind of does. Thanks!
I never thought about making hot pluggable SMBus peripherals. It's an interesting idea. Many motherboards even have headers broken out, and of course operating systems already have drivers for many SMBus peripheral types. LoFi USB.
We have hot pluggable I2C at home. Every HDMI port (and pretty much every DP port with a passive adapter) has I2C on the DDC pins. The port also provides 5V 50mA so your MCU doesn't need external power. Example: https://www.reddit.com/r/raspberry_pi/comments/ws1ale/i2c_on...
Does HDMI use SMBus ARP?
Perhaps there is a place for a simpler alternative. My comment was pretty tangential to this discussion about the merits of SPI vs USB vs whatever. My point is that I believe some benefit can be had by software developers in understanding how components can be integrated together using a primitive as simple minded as SPI. I used the qualification "some" again, as well. I don't offer any revolutionary insights, but if you survey how SPI is used in practice, you'll learn some things of value, even if you never use SPI yourself.
And for a computer like the one here, even having hot-plug is an unnecessary luxury.
I think there's something like a dual of the uncanny valley when it comes to protocol complexity vs adoption. Really simple like UART, I2C, or SPI, and engineers will adopt it on their own. But once you start wanting to add some higher level features, engineers would just as soon reinvent their own proprietary schemes for their own specific needs (the "valley"), and the network effects go away. So to create a more popular protocol you end up with a design committee where everyone piles in their own bespoke feature requests and the thing ends up being an ugly complex standard that nobody really likes. But at least it gives it a shot at wider adoption to prime the pump of network effects. (Or maybe it's more akin to how Java beats the Lisp Curse by using the social effect of having to raise an army to do anything?)
Isn't that basically what USB is? At least if you stick to USB 1. Obviously, since that time, it's expanded to cover a wider range of capabilities. It's a half-duplex serial line, just like I2C. Unlike I2C, it's asynchronous, like a UART.
No, even USB 1 is a ridiculously complex networking protocol stack running on top of that half-duplex serial line.
Very true. And dealing with bit ordering, half-duplex, the 4 modes, chip select to first clock setup times, bits per word, "strobing" nCS between words, the list goes on. But when you see "USB 1.1 device" you know a large majority of what it can support and what it'll do.
These (and hotswapping/handshaking) are small quibbles that could be addressed with a simple standard subset of SPI for human-plugable devices.
While true, it does seem like the flexibility allows for good optimizations. For example, a lot of devices don't need addresses, or have memory maps, etc. So, sadly, while it's a pain to deal with, it does make things very fast and efficient.
Since when does USB have error correction/detection?
USB bulk packets have a 16 bit CRC.
Interesting. I recently tested a 6ft USB3 cable and an attached drive. The transfer of a 1TB file failed a few times (not sure of the details). This is strange, since the cable couldn't have been that bad (?) and the 16-bit CRC should have caught those errors (assuming an error will trigger a resend of data). Any ideas what the issue could have been? Does Linux provide a way to view the error rate?
> the 16-bit CRC should have caught those errors
I believe your confidence in the 16-bit CRC is excessive. There is a 1 in 65536 chance of a 16 bit CRC failing for certain types of corruption in 512 byte bulk USB packets, and there are about 2 billion packets in a 1TB transfer. If the BER is high, corruption of the transfer is not surprising.
A 6ft cable should be fine, assuming it is well designed, manufactured correctly, in good condition, and not in close proximity to high noise sources, such as SMPS. If any of those factors are compromised the BER will increase, and you will then be testing the rather limited capabilities of 16 bit CRC.
USB4 has 32 bit CRC for data payloads for a reason. In the mean time, the #1 thing you can do is use short, high quality cables.
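For concreteness, USB's bulk-data CRC is a CRC-16 with polynomial x^16 + x^15 + x^2 + 1. A sketch of the bitwise form, plus the packet arithmetic from the comment above (my own illustration, not text from the spec):

```python
def crc16_usb(data: bytes) -> int:
    """CRC-16/USB: poly 0x8005 (0xA001 reflected), init 0xFFFF,
    bit-reflected in and out, final XOR 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

assert crc16_usb(b"123456789") == 0xB4C8  # standard check value

# ~2 billion 512-byte payloads in a 1 TB transfer; a corrupted
# packet has roughly a 1-in-65536 chance of passing the CRC anyway.
packets = 10**12 // 512
assert packets == 1_953_125_000
```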
https://wiki.wireshark.org/CaptureSetup/USB says "Software USB capture captures URBs (USB Request Blocks) rather than raw USB packets." So Wireshark couldn't give you the CRC, which would be stripped away.
You could hypothetically use a really, really high-bandwidth oscilloscope (like 2 GHz, to view 480 Mbps USB HS signals), but those are expensive. So you would have to resort to using an external USB sniffer...out of curiosity I found someone made a sniffer that is basically a USB-capable microcontroller plus an FPGA and a USB PHY: https://hackaday.com/2023/06/13/cheap-usb-sniffer-has-wiresh...
I work in the AV industry. RS-232 is still the king for control signals between devices, even on brand new hardware that costs >10K USD. TV screens for signage/conference rooms often have RS-232 for more versatile control than HDMI-CEC. Higher bitrate than 9600 BPS is often not needed. The most common connector consists of three-pin screw terminals (Tx, Rx, GND), although these days most installations have at least one RS232-to-USB adaptor somewhere. And for larger rooms, RS232 is bridged over Ethernet.
This was a bit of a surprise when I started, but then I realised that many installations are decades old, with components having been replaced individually.
The article goes through a long list of 8 pin chips but ignores the very popular $0.10 CH32V003, which has 2k RAM and 16k Flash running at 48 MHz and 1 CPI -- or the new CH570 (I have a dev board on the way) which is also $0.10 in SOIC8 but now runs at 100 MHz with 16k RAM and 256k flash and has USB and a 2.4 GHz packet radio.
CH32V003 is not available on mouser.com or digikey.com
Googling for "CH570" produces results about tractors. Got a link?
EDIT: found info here: https://www.cnx-software.com/2025/04/02/10-cents-wch-ch570-c...
8-pin part lacks USB AND only has 3 I/O pins. It would be disqualified due to being too I/O-poor. Wasting 5 pins out of 8 is a joke!
As for the old one, CH32V003: 48MHz is slower than the STM's 150MHz, half the flash, 1/4 the RAM. It is still not the best option.
I did update the article with them, though :)
> 8-pin part lacks USB AND only has 3 I/O pins
But you get radio (BLE in the CH572 version), which means you don't need USB.
My comment was not that you didn't choose them but that you didn't consider them.
You do NOT get BLE in the 8-pin part
I just considered them and added them to my writeup :)
You get the radio in the 8 pin part. That's the "ANT" connection, pin 8, one of the three "wasted" pins (along with the crystal) you complain about.
Yup. All of which would be useless for this project. :)
Damn, only one seller on Aliexpress right now and no dev boards. Where’d you find yours?
The official WCH store on Aliexpress. Stock is coming in slowly 10 at a time and then selling out in 30 minutes or so, but it is coming in.
https://www.aliexpress.com/item/1005008743123631.html
2.4Ghz. I was really wondering about the 24Ghz.
oops .. my brain of course knows but my fingers didn't in this instance. Fixed.
I think it's compatible with the old nRF24 chips -- I'll test when mine arrives in a week or so. The CH572 version has BLE5 ... I think the same hardware but including a software stack.
There are plenty of MCUs that will work as a USB device, they were just ruled out by the package restriction.
Well yeah, nowadays high-end microcontrollers may have an integrated USB HS PHY (notably the STM32F7s and the MIMXRT1060 used in the Teensy 4, and many others), but the basic cheap ATtiny-like or iCE40-like parts don't, and most usually require going through an external PHY. I've been wanting to get into using the CH32V305 because it is in a hand-solder-friendly TSSOP-20 package and has an integrated USB HS PHY, but I hear it doesn't have much software support, and I don't see it on Microchip/Digikey/etc. We may soon have easy access to 20-cent microcontrollers with USB HS, but still, the protocol feels incredibly complex and way overkill for simply interfacing a peripheral to a computer.
I still install brand new computers with serial ports! Dell sells us OptiPlex towers and we occasionally order them with a serial card to connect to legacy scientific instruments.
I bought a Lenovo mini ThinkStation with a serial port because I thought it would be cool. But I don't even know what cool stuff I can plug in there, except for a serial console.
So you want to replace a USB PHY with a serial to Ethernet converter and an Ethernet PHY.
The reality is that the simple protocols like SPI and I2C just are not good enough. They aren't fast, the single-ended signal scheme makes them very sensitive to noise, and there is no error correction. These protocols make sense and work extremely well for their intended purpose: connecting ICs on a PCB. If you expose an unterminated port to the outside world, all bets are off.
These protocols and variations thereof are still in heavy use in modern PCs. But they're internal busses, as the protocols intend.
I haven't looked closely at the USB spec, but I imagine the main problem with bit-banging is simply the speed required. You have to have dedicated hardware because no microcontroller is fast enough to toggle the pins while also running the software stack to decode the protocol and manage error correction.
You can run into this exact problem bit-banging I2C. With a 20 MHz CPU, the maximum clock speed you can get is about 250 kHz. Just a bit more than half the typical maximum rate of 400 kHz. You can absolutely forget about the 1 MHz version.
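The arithmetic behind that estimate, with the per-bit cost as an assumption rather than a measurement:

```python
# Rough cycle budget for bit-banged I2C on a small MCU.
cpu_hz = 20_000_000        # 20 MHz core, ~1 instruction per cycle assumed

# Assumed cost of one SCL half-period in a tight bit-bang loop:
# toggle SCL, set/sample SDA, shift the data register, loop overhead.
cycles_per_half_bit = 40   # illustrative number, not measured
cycles_per_bit = 2 * cycles_per_half_bit

max_scl_hz = cpu_hz // cycles_per_bit
assert max_scl_hz == 250_000   # 250 kHz: just over half of 400 kHz
```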
PHYs exist for one very good reason: it is vastly cheaper to offload comms protocols to hardware. Without that, you have to over-spec your CPU by quite a lot to get enough resources to manually manage communication. This is why every modern microcontroller contains hardware for I2C, SPI, serial, etc.
In summary, the simple serial protocols like SPI and I2C and UART are just absolutely terrible choices for external peripherals. They can't operate at reasonable speeds, they can't tolerate long cables, they can't tolerate noise. The nature and design of these protocols (excepting RS232 which is not UART) means that they cannot be used this way. There's no change to the spec you could make to support this without reinventing USB.
UART over LVDS is still quite simple and works well for long cables and it tolerates ground differences and noise well.
Yeah, and LVDS is something that really cheap ice40 FPGAs support with not much of an area cost.
(In my original comment I should have said to use differential signaling for going off-board.)
USB is also tough to bitbang because it has pretty strict timing requirements. Compare that to something like I2C, where the clock only advances when the pin is explicitly toggled.
You may have intended to say SPI. I²C does support "clock stretching" to delay until ready, but that's only in one particular case; otherwise the I²C clock advances all the time at whatever your baud rate is, not only when a pin is explicitly toggled.
That depends on if you are the controller or the target, no? My usual use case for i2c is for talking to some peripheral from a microcontroller, where I am acting as the clock source. Clock stretching applies to the target side, at least when you are talking about SCL.
Hmm, on thinking about it further, I guess I was pretty comprehensively wrong. Thank you.
If there has to be a chip to facilitate coms, I feel like you could go on a similar hunt for 8 pin microcontrollers that could serve that function and maybe also provide some extra functionality. It would be interesting if it could connect to a PC though a DDC connection.
I mean technically you have all of these interfaces on a raspberry pi.
Yes, but the RPi is not yet quite a standard desktop/laptop workstation. And it would be nice if the RPi exposed its SPI, I2C, and UARTs over a standard mini connector like JST-SH to ease plugging in peripherals (as is done with Stemma QT, Qwiic and Grove).
Tad more than 8 pins on a pi
i mean... someone did try this with i2c. a couple of dead computer companies shipped a bus that i forget the name of, based on this concept. its descendant is the vga hdmi control channel spec (which was implemented as a de facto separate standard but is very similar)
the name is escaping me
ACCESS.bus by Philips (who developed I²C) and DEC, and the DEC variant SERIAL.bus (with different voltage levels) used by their keyboards and mice for a little while.
And a variant of ACCESS.bus lives on as the extremely widely adopted DDC that is a part of HDMI, DVI, and VGA.
USB is something that is possible to understand, and apparently bit-bang, at least at low speed (1.5 Mbps). Probably full speed (12 Mbps) as well on a modern MCU. I don't understand it, but one can.
In that sense it's like SPI, or perhaps more like CAN or SD: when you don't understand it, you reach for someone else to have done it for you, but you can choose to understand it and once you understand it you can implement it.
If you're the slave you have tight timing requirements but you only have to respond with certain fixed bit patterns to certain other bit patterns. If you're the master, you can do more things concurrently because the slave won't notice a little jitter in how often you poll it, but you have the problem of dealing with a wider variety of slaves that can be connected.
Oh, USB 1.1 at the transport layer and lower is not that difficult.
But there is more complexity on higher layers. USB HID (mice and keyboards) is often the first you'd want but it is special in that it allows a device to describe its own packet format in a tokenised data description language. The device only has to send an additional blob when asked, but the host has to parse the contents of that blob and use the result to parse the device's packets.
And of course, every time there is complexity in a protocol and there are multiple implementations of it, there is more opportunity for them to be incompatible in very subtle ways. This phenomenon has caused for example that some gaming keyboards with N-key rollover that work perfectly on MS-Windows without any special drivers have been rejected outright by Apple or Linux hosts. (I hope these issues have been fixed now, but I'm not sure).
I thought there was also a mandatory fixed-layout "boot" profile for mice and keyboards? There was some controversy because vendors interpreted it as only being allowed to support the boot profile, resulting in most USB keyboards having 6-key rollover maximum.
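There is indeed such a boot protocol, and its keyboard report has a fixed 8-byte layout. A decoding sketch following the HID spec's boot-interface layout (the six key slots are exactly where the 6-key-rollover ceiling comes from):

```python
def parse_boot_keyboard_report(report: bytes):
    """Decode an 8-byte HID boot-protocol keyboard report:
    byte 0    modifier bitmap (bit 0 = LeftCtrl ... bit 7 = RightGUI)
    byte 1    reserved
    bytes 2-7 up to six concurrently pressed key usage codes
    """
    if len(report) != 8:
        raise ValueError("boot keyboard reports are exactly 8 bytes")
    modifiers = report[0]
    keys = [k for k in report[2:8] if k != 0]  # 0 means empty slot
    return modifiers, keys

# Left Shift (modifier bit 1) held together with 'a' (usage code 0x04):
mods, keys = parse_boot_keyboard_report(bytes([0x02, 0, 0x04, 0, 0, 0, 0, 0]))
assert mods == 0x02 and keys == [0x04]
```

A host can always fall back to this layout without parsing the report descriptor, which is why firmware and BIOSes use it.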
Thanks, this was very insightful and entertaining once again!
One thing: you might want to mention the required board thickness (0.8mm, iirc?) for people planning to have their own boards made.
Edit, explanation for others: that is required to make the "USB-C edge connector" fit the plug.
Excellent article, thanks. At the risk of missing the forest for the trees, I wonder how much simpler things would have been if you had been slightly flexible on the 8-pin requirement. It seems as though having just a few more pins would have reduced the complexity of the project significantly while only marginally increasing the time it takes to solder.
It would be no challenge at all then. And no fun. There are plenty much faster chips with USB built-in
Hell, allwinner v3s is hand solderable and has built in RAM and will happily boot Linux natively
RP2350 would also be an excellent choice. It has a very good QSPI RAM interface with cache built in, and USB support.
Hand-solderable might be relative. Certainly, QFNs and BGAs are more difficult. But I don't think the average hobbyist can solder QFPs, especially with an exposed underside pad. Heck, as a hobbyist, I don't think I'd trust myself to solder SOICs.
Soic8 is doable as your first soldering project. I tried this out on a few people successfully.
It's almost 2 chips. One is just a USB-serial IC! But you didn't count the SD card, so you're up to 3 again.
Total pin count is so low on this, I'm very tempted to make a dead bug version.
The SD card itself contains a fairly powerful processor, likely a 32-bit ARM. It would be a fun hack to do a similar trick with that.
A very fast 8051 is also just as, if not more, likely to be in the SD card:
https://www.bunniestudios.com/blog/2013/on-hacking-microsd-c...
there's this thing! https://hackaday.com/2016/06/30/transcend-wifi-sd-card-is-a-...
RISC-V in Sandisk.
i have an eye-fi SD card that has wifi built in. So you don't even need the serial port!
Been there. Done that. Got the t-shirt: https://dmitry.gr/?r=05.Projects&proj=15.%20Transcend%20WiFi...
real nice work! I didn't think a mere upvote conveyed how cool it was to see this, and have a response nearly immediately.
I didn’t make a dead bug version so you’d be first :). Microsd-to-sd adapters make good solderable microsd holders.
The USB-to-serial IC could go into a cable made by others and "not count", the same way the µSD doesn't count.
It is a very cool project, but I think the author (I know you will read this) took it to the extreme. Again, it is very cool technically, but it contradicts the declared goal: to make a new computer kit for beginners.
IMHO, it doesn't matter to a novice what they solder, SOIC8 or SOIC28. SOIC28 is as easy (or as hard, if you want) as SOIC8.
And a larger chip could make a much more useful computer: it would be possible to add some minimal sound (as such chips typically have a DAC), a keyboard, and maybe later true monitor output in VGA style (not DP or HDMI, of course).
It would not be much harder (if at all) to solder, but could be a good base for expansion if the owner gains interest in such things.
Yes. And you’re free to use my code to do that. I wanted the fun artificial limitation of 8 pins.
I have the perverse urge to forego even the board, and just make this a circuit sculpture.
silkscreen "555" onto at least one of the ICs, if you do this
Do it!
I am not an artist or a sculptor so I did not dare try
> but I am allergic to RISC-V for personal reasons.
Do you mind elaborating?
I find the instruction set distasteful. I don’t want to start a flame war. This is just my opinion, but it is a strong one.
It was designed late enough in history to have taken advantage of a lot of available information. None of it was taken advantage of. Which is why a lot of extensions are now being proposed to actually fix things that should’ve been done right in the first place. With all the additions, it is slowly approaching sanity, only 10 years later. And I don’t buy the excuses that the learning process needed to happen. All the information was available all along, and the mistakes were obvious to basically all of us all along
Some of the extensions are only Band-Aids for the real design issues. E.g. sh2add is a band-aid for not having proper addressing modes for accessing arrays. A common refrain in answer to this is promises of magical instruction fusion in the core. This is often promised, but never delivered. Certainly not in the cheap kind of processors that are the only target for RISC-V. Not having instructions for bitfield extraction and insertion is also an amateur mistake. That’s why there are extensions to fix that one too. But it should’ve been obvious from the beginning that it would be necessary. A conditional branch based on a bit in a register is another obvious thing that should’ve been considered from the very beginning, as it is commonly encountered. Any analysis of modern software would’ve shown this.
What annoys me is the information was available. We know what sorts of things modern software does. It was all ignored. Instead, we got a slightly updated mips-1. And now with all the extensions, it’s fragmentation galore. You can either target the final result (RV23, I think is the name), which is somewhat sensible, but no hardware implements, or you can target the least common denominator, which will run everywhere, shittily
There are other, more serious, design issues when it comes to attempting to use RISC-V for actual high-performance computing. I’ll save those for another rant.
At approximately the same point in history, another instruction set was designed. It actually took advantage of all the knowledge available about what modern software looks like, and it shows: aarch64.
If I may ask, what's your absolute favorite ISA? Is it ARM64 or something less common/practical?
For “big and fast”, aarch64. For “medium and small”, armv8M, for “very small”: AVR
On the flip side, my understanding is that AArch64 does not support anything like RISC-V's 16-bit compressed instructions, which are trivial to implement in hardware and allow RISC-V code density to match and even beat x86-64 for real-world binaries. I think RISC-V's design criteria make plenty of sense on their own, they're just not the ones you might prefer.
That was the other side of my rant that I alluded to and didn’t go into. The compressed instruction set is idiotic for big cores that need to go fast. It wastes 3/4 of the encoding space on nonsense that is of no use there. When you’re designing a big out-of-order core, slapping on a slightly bigger L1 is not a problem. Variable-size instructions are one.
Compare: Apple M4 perf vs $your_favourite_rv_”fast”_core perf per MHz
It makes sense for micro controllers, but not for big cores.
This whole approach of trying to be everything for everyone is one of the reasons that RISC – V ends up being mediocre for everyone and perfect for no one.
> Compare: Apple M4 perf vs $your_favourite_rv_”fast”_core perf per MHz
We might get a meaningful comparison like that if Qualcomm starts making RISC-V based SoC's as a hedge against ARM. Or if Tenstorrent comes up with a M4-like CPU design. I think the jury is very much still out as to whether the rather limited variable-sized insns (2 or 4 bytes) of RISC-V + the compressed insn extension is a genuine concern. It's certainly nothing like the chaos you see with x86-64 (which seems to be a real bottleneck for very wide decode), and a lot closer to something like the old ARM32+Thumb2.
It’s been ten years. The jury has returned. Rendered judgement. Left to a picnic. Come back. And since retired to a carefree life of llama farming.
Your argument is wishful thinking. Not fact. “Well, maybe if someone does it” isn’t a fact. Maybe someone will make an 8051 that outperforms my M4. But I won’t believe it till it is done.
And indeed it is close to thumb2. Which was purposefully rejected for aarch64. By careful study. Given that between the two, aarch64 looks to be much better thought-through, I’ll be giving the credit for making the right decision here to that team too.
>You can either target the final result (RV23, I think is the name), which is somewhat sensible
RVA23 is the name. And I hear this is what e.g. Windows, Android and the next Ubuntu LTS target.
>or you can target the least common denominator, which will run everywhere, shittily
Not as shitty as you make it out to be. And it has a huge advantage over aarch64 in its simplicity. Easily an order of magnitude simpler, which allows it to be used in scenarios aarch64 could only dream of.
>At approximately the same point in history, another instruction set was designed
I get it, you really like aarch64.
Which one weighed its options better? You might be right, but as years pass we'll have the benefit of hindsight. We'll be able to look back and see whether either side had good choices or cursed ones.
It's going to be fun. Hasn't been this fun since the 90s.
For big cores that need to go fast, it is not even a close comparison. Aarch64 is tuned for that.
For small cheap stuff, sure, RV is ok. But so was AVR/ARMv6M/ARMv7M.
I think it would be cute to also use 8 pin SPI flash chip instead of SD card for storage.
Looked into it. But then the “getting files in and out” story gets hard.
You can use a chip clip, but presumably you rejected that idea because your objective is for this to be replicable by the kind of people who don't know what a chip clip is.
I knew I wanted one, but I didn't know that it was called a "chip clip" and indeed "IC clip" and "IC test clip" seem to be a better match.
The clip will give you an electrical connection to the SPI flash, but I'm not sure you'll be able to talk to it without jumpers on the board. Is it possible without jumpers?
Normally.
I'd be very surprised if there isn't a filesystem driver for SPI flash. Linux obviously can speak SPI and SPI flash is extremely common in a lot of applications.
I mean once assembled. In my design you can remove card, put files in, boot again, use the files.
Theoretically you can use one of these: https://www.tindie.com/products/bobricius/micro-sd-card-to-s...
I haven't tried it so I don't know how well it works
It looks like a passive mechanical adapter. I'd argue that a SOIC-8 programming clip thing would be better. This looks like you need to write a decent amount of code for a computer to talk to it as easily as a FAT-16 SD card.
You're right; it looks like SD cards implement a protocol on top of SPI. They don't just map the content to SPI addresses. So a flash chip won't look like an SD card to the OS
Right. Unsure if it's possible that any SD controller has a secret SPI mode. This seems like a ridiculously niche product. Which is fine but just something to keep in the toolbelt.
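To illustrate the "protocol on top of SPI" point: in SPI mode an SD command is a 6-byte frame with its own CRC-7, so a raw flash chip can't answer like a card. A sketch of building CMD0 (GO_IDLE_STATE), whose resulting frame is the well-known 40 00 00 00 00 95:

```python
def crc7(data: bytes) -> int:
    """CRC-7 used by SD commands: polynomial x^7 + x^3 + 1, MSB first."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            feedback = ((byte >> i) & 1) ^ (crc >> 6)
            crc = (crc << 1) & 0x7F
            if feedback:
                crc ^= 0x09
    return crc

def sd_command(cmd: int, arg: int) -> bytes:
    """Build a 6-byte SD command frame for SPI mode:
    start+command byte, 4 argument bytes, then CRC-7 plus end bit."""
    frame = bytes([0x40 | cmd]) + arg.to_bytes(4, "big")
    return frame + bytes([(crc7(frame) << 1) | 1])

assert sd_command(0, 0).hex() == "400000000095"  # CMD0, GO_IDLE_STATE
```

After CMD0 the card replies with an R1 status byte, and the init dance (CMD8, ACMD41, ...) continues from there; none of that exists on a bare SPI flash.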
Why am I imagining this being used for really serverless iot infrastructure?
Like imagine that you wanted to deploy something on a lot of devices: by using something like this, open source and really limited but (and this is the point) it just works, you could actually have PCB providers build it, ship it from their warehouse or wherever, and just provide it energy and Ethernet. Now you can probably SSH into it, or even create some sort of Vercel-like UI on top of it (Coolify/Dokploy? though we would need to slim down Docker a lot for Dokploy?).
And when the work is done, they would actually scrap the metal/PCB and reuse it again...
I am not sure if such metal/PCB recycling makes sense...
If anyone technical can respond to this, it would be great.
This can also be done with risc-v as well. I am not sure but I was thinking of creating a very dead simple company (my brain and its weird thoughts... , also Don't copy me or if you do, then hire me xD) which just can take a device like old phones and then just root them using AI? / manual or maybe not even root it? IDK...,
Basically then providing them internet access and energy (Not a traditional warehouse) because you actually only pay for the one time fees and afterwards all the fees that you pay, they are of the real costs bore by the company operating it / no middle man profits.
Kind of like a "Costco" (oh I had forgotten name of Costco and I had to search target alternatives xD) where they actually are there to help you save money but for you to use their services you gotta have a card.
I've dabbled in microcontrollers and enjoy how the limitations force me to find creative solutions, but this is truly next level. I'm not great with a soldering iron but I'm seriously considering assembling one of these.
Aside from the project itself being very cool, this page is a great source for information about small microcontrollers, even if it does omit the WLCSP ones. And it links to the MIPS emulator page for ARM at https://dmitry.gr/?r=05.Projects&proj=33.%20LinuxCard which seems very interesting.
> It does not help that modern operating systems require gigabytes of RAM, terabytes of storage, and always-on internet connectivity to properly spy on you
Love it.
It is interesting how, to go to the limits of the low end, the author needs to implement a CPU emulator. It is not obvious that at this level you need to ADD layers, not REMOVE them!
I agree! While it is a fairly common approach (e.g. the Apollo Guidance Computer interpreted its equations instead of directly executing them, and there's SWEET16 by Woz), it is not obvious at first.
This pleasantly reminds me of the little 6502 or 1802 Altoids-tin computers you can buy and assemble, but arguably more "useful" (though I get a lot of use out of a 6502 ;-).
What do you do with your 6502?
super fun read thanks. cool result too :)! also love all the different options for hw discussed in lot of details. thx!
I knew it was gonna be Dmitry from reading the title
Not sure if compliment or insult :)
Definitely compliment! At least, if it was my comment with exactly same text
Like somebody said in response to a paper which was anonymously published by Newton: "I recognize the Lion from his claw!"
lol I’d assume a compliment!
Under parts selection.. Even considering the PIC 16F. Why.
Every 8 pin MCU was given a shot
Was there no native ARM Linux you could have used? As I recall, you have used this emulated MIPS technique in many of your published projects, so it's good to prove that the hardware is working?
Or why not just go full native....grab some MIPS-core IP and make your own with an FPGA?
Because no FPGAs come in 8-pin packages
And no Linux runs on cortex-m0 with ram attached over SPI.
And MIPS is the easiest Linux-compat architecture to emulate.
Surely emulating ARM on ARM would be faster / easier to JIT? Or at least it seems that way
Actually, no, MIPS is an easier JIT target than ARM as well. Source: I've written both ARM-to-thumb1 and MIPS-to-thumb1 JITs
Fascinating!
The main hang-up is that ARM uses PC as a general register a lot and in ways that make translating ARM instrs rather messy.
Something like an ADD that uses PC as both the destination and every source operand is an entirely valid, even if nonsensical, instruction. But you must translate all valid inputs, else you risk breaking things. That may be a contrived example, but here is a common one: if one has a jumptable of relative offsets somewhere, pointed to by R10, even a PC-relative load straight into PC is valid. That gets messy to translate.
Interesting, I hadn't thought about this. Is the issue that the JIT output is likely to be a different number of bytes away? tbb/tbh seems like a more common version of that problem, TBH.
As I understand it, this kind of thing was a big problem for ARM in the mid-90s when they finally wrote the ARM ARM and outlawed things like ldmia r2!, {r0-r4}.
Different number of bytes out than in is not an issue. Efficiently translating such constructs in the general case is hard. Imagine what it would look like.
I have some ideas, but I guess I should try writing a JITting emulator instead of asking you to debug them :)
Try it - it is a fun project.
Thanks for the encouragement! I admit to being a bit intimidated by it.
> There was a time when one could order a kit and assemble a computer at home
I think I get what OP means but you can definitely order a pc kit and just assemble it nowadays
Not in the same way. Not with a soldering iron and schematics.
I recognize there is something hard to describe about how "it's not the same", but:
https://eater.net/shop
Yes, but sadly that cannot run modern Linux
Please tell me you didn't use ENIG for the USB connector. It would be sacrilege.
I am just a lowly software guy who pretends to know how to use EAGLE. To me “ENIG” just means “5% more expensive boards”. What did I miss?
ENIG is a thin gold coating, not solid gold, so it'll wear off and the metals under it will likely corrode. I eventually see wear marks on the majority of my USB connectors.
I see. These contacts are just coated with lead-free solder, and thus are easy to re-coat with a soldering iron if they start wearing.