I was feeling a bit like the Petunia and thought "Oh no, not again." :-) One of the annoyances of embedded programming can be having the wheel re-invented a zillion times. I was pleased to see that the author was just describing good software architecture that creates portable code on top of an environment-specific library.
For doing 'bare metal' embedded work in C you need the crt0, which is the weirdly named C startup code that satisfies the assumptions the C compiler made when it compiled your code. And a set of primitives to do what the I/O drivers of an operating system would have been doing for you. And voilà, your C program runs on 'bare metal.'
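For the crt0 part, a minimal sketch of what that startup code can look like on a Cortex-M-class part (the symbol names, _sidata/_sdata/_sbss and so on, are the usual linker-script conventions, purely illustrative here):

    #include <stdint.h>

    extern uint32_t _sidata, _sdata, _edata, _sbss, _ebss;  /* linker symbols */
    extern int main(void);

    void Reset_Handler(void)
    {
        uint32_t *src = &_sidata;
        uint32_t *dst = &_sdata;

        /* Copy initialized data from flash to RAM. */
        while (dst < &_edata)
            *dst++ = *src++;

        /* Zero .bss, which the compiler assumes is cleared before main(). */
        for (dst = &_sbss; dst < &_ebss; ++dst)
            *dst = 0;

        main();
        for (;;)
            ;   /* nothing to return to on bare metal */
    }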
Another good topic associated with this is setting up hooks to make STDIN and STDOUT work for your particular setup, so that when you type printf() it just automagically works.
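With newlib specifically, that hook is usually a pair of stub "syscalls" that the library calls under the hood; a hedged sketch (uart_putc()/uart_getc() stand in for whatever your board provides, and the exact stub prototypes can vary with the newlib configuration):

    extern void uart_putc(char c);   /* board-specific, assumed elsewhere */
    extern char uart_getc(void);

    int _write(int fd, char *buf, int len)
    {
        (void)fd;                    /* stdout and stderr both go to the UART */
        for (int i = 0; i < len; i++)
            uart_putc(buf[i]);
        return len;
    }

    int _read(int fd, char *buf, int len)
    {
        (void)fd; (void)len;
        buf[0] = uart_getc();        /* one character at a time is plenty */
        return 1;
    }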
This will also then introduce you to the concept of a basic input/output system, or BIOS, which exports those primitives. Then you can take that code in flash/EPROM, load a compiled binary into memory, and start it, and now you've got a monitor or a primitive one-application-at-a-time OS like CP/M or DOS.
It's a fun road for students who really want to understand computer systems to go down.
At my school, we did the following project: https://github.com/lse/k
It is a small kernel, going from just a bootloader up to running ELF files.
It has like 10 syscalls if I remember correctly.
It is very fun, and it really makes you understand the ton of legacy support still in modern x86_64 CPUs and what the OS underneath is doing with privilege levels and task switching.
I even implemented a small ROM for it that has an interactive ocarina from Ocarina of Time.
This is really neat. So many engineers come out of school without ever having had this sort of 'start to finish' level of hands on experience. If you ever want to do systems or systems analysis this kind of thing will really, really help.
What is your school? I thought it was the London School of Economics, but it’s another LSE.
It's EPITA, in France.
LSE is the systems laboratory of EPITA (https://www.lse.epita.fr/)
No BIOS necessary when we're talking about bare metal systems. printf() will just resolve to a low-level UART-based routine that writes to a FIFO to be played out to the UART when it's not busy. Hell, I've seen systems that forgo the FIFO and just write to the UART, blocking while writing.
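The blocking flavor is only a few lines; a sketch assuming a hypothetical memory-mapped UART (register addresses and bit names are made up for illustration):

    #include <stdint.h>

    #define UART_SR   (*(volatile uint32_t *)0x40011000u)  /* status register (hypothetical) */
    #define UART_DR   (*(volatile uint32_t *)0x40011004u)  /* data register (hypothetical)   */
    #define UART_TXE  (1u << 7)                            /* "transmitter empty" bit        */

    static void uart_putc_blocking(char c)
    {
        while (!(UART_SR & UART_TXE))
            ;                             /* spin until the UART can take a byte */
        UART_DR = (uint8_t)c;
    }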
I hope nobody was confused into thinking I thought a BIOS was required; I was pointing out the evolution from this to a monitor. I've written some code[1] that runs on the STM32 series that uses the newlib printf(). I created the UART code[2], which is interrupt driven[3], which gives you the fun feature that you can hit ^C and have it reset the program (useful when your code goes into an unexpected place :-)).
[1] https://github.com/ChuckM/
[2] https://github.com/ChuckM/nucleo/blob/master/f446re/uart/uar...
[3] https://github.com/ChuckM/nucleo/blob/master/f446re/common/u...
This was my attempt at a minimal bare-metal C environment:
https://github.com/marssaxman/startc
That's awesome. Back in the day this was the strong point of eCos, which was a bare-metal "platform" for running essentially one application on x86 hardware. The x86 ecosystem has gotten so complicated that being able to do this can get you better performance for an "embedded" app than running on top of Linux or another embedded OS. That translates into your appliance-type device using lower-cost chips, which is a win. When I was playing around with eCos, a lot of the digital signage market was using it.
Does anyone still do it that way?
With AMD64-style chips? Probably not. Multi-core systems really need a scheduler to get the most out of them, so perhaps there are some very specific applications where that would be a win, but I cannot think of anything that isn't super specific. For ARM64 chips with a small number of cores, sure, that is still a very viable option for appliance-type (application-specific) applications.
This sounds fascinating and absolutely alien to me, a Python dev. Any good books or other sources to learn more you can recommend?
There's always the Minix book!
Newlib is huge and complex (even including old K&R syntax) and adapting the build process to a new system is not trivial. I spent a lot of time with it when I re-targeted chibicc and cparser to EiGen, and finally switched to PDCLib for libc and a part of uClibc for libm; see https://github.com/rochus-keller/EiGen/tree/master/ecc/lib. The result is platform-independent apart from essentially one file.
For a static library it does not matter whether it is huge and complex, because you will typically link into your embedded application only a small number of functions from it.
I have used a part of newlib with many different kinds of microcontrollers, and its build process has remained essentially the same for a quarter of a century, so the script that I wrote the first time, before 2000, has always worked without problems, regardless of the target CPU.
The only tricky part that I had to figure out the first time was how to split the compilation of the gcc cross-compiler into a part that is built before newlib and a part that is built after newlib.
However, that is not specific to newlib; it is the method that must be used when compiling a cross-gcc with any standard C library, and it has been simplified over the years, so that now there is little more to it than choosing the appropriate make targets when executing the make commands.
I have never needed to change the build process of newlib for a new system; I have just needed to replace a few functions, for things like I/O peripherals or memory allocation. However, I have never used much of newlib, mostly only stdio and the memory/string functions.
In school we were taught that the OS does the printf. I think the professors were just trying to generalize and not go on tangents. But once I learned that, no, embedded libc variants do have printf, they just lack an output path, it got a lot easier to figure out how to get it working. I wish I had known about SWO and the magic of semihosting back then. I don't think those would be hard to explain, and interestingly it's one of the few things students asked about that, now that I'm in the field, coworkers also ask me how to do (setting up _write).
220k just to include stdio? That's insane. I have 12k and still do I/O. Just without the overblown stdio and sbrk; uart_puts is enough. And only in DEBUG mode.
I thought this was going to talk about how printf is implemented. I worked with a tiny embedded processor that had 8k imem, and printf is about 100k alone. Crazy. Switched to a more basic implementation that was around 2k, and it ran much, much faster. It seems printf is pretty bloated, though I guess typical people don't care.
In most C standard libraries intended for embedded applications, including newlib, there is some configuration option to provide a printf that does not support any of the floating-point format specifiers.
That is normally enough to reduce the footprint of printf by more than an order of magnitude, making it compatible with small microcontrollers.
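With newlib, for instance, one route is the integer-only iprintf() family; a hedged sketch (the exact configuration knob, e.g. newlib-nano's specs file, depends on your toolchain):

    #include <stdio.h>

    int main(void)
    {
        int adc_raw = 1234;

        /* iprintf() formats integers only, so none of the float conversion
           machinery gets pulled in by the linker. */
        iprintf("adc raw = %d (0x%04x)\r\n", adc_raw, adc_raw);
        return 0;
    }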
Has anybody played with newlib, but grown the complexity as the system came together?
It seems like one thing to get a bare-bones printf() working to get you started on a bit of hardware, but as the complexity of the system grows you might want to move on from (say) pushing characters out a serial interface to drawing them on a bitmapped display.
Does newlib allow you to put different hooks in there as the complexity of the system increases?
Newlib provides both a standard printf, which is necessarily big, and a printf that does not support any of the floating-point format specifiers.
The latter is small enough so that I have used it in the past with various small microcontrollers, from ancient types based on PowerPC or ARM7TDMI to more recent MCUs with Cortex-M0+.
You just need to make the right configuration choice.
You can always write a printf replacement that takes a minimal control block that provides put, get, control, and a context.
That way you can print to a serial port, an LCD Display, or a log.
Meaning, seriously, the standard printf is late-1970s hot garbage and no one should use it.
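A hedged sketch of that control-block idea (out_dev and dev_printf are illustrative names, and this version leans on vsnprintf for the actual formatting rather than replacing it outright):

    #include <stdarg.h>
    #include <stdio.h>

    struct out_dev {
        void (*put)(void *ctx, char c);   /* serial port, LCD, in-memory log, ... */
        void *ctx;
    };

    static void dev_printf(const struct out_dev *dev, const char *fmt, ...)
    {
        char buf[128];
        va_list ap;

        va_start(ap, fmt);
        int n = vsnprintf(buf, sizeof buf, fmt, ap);
        va_end(ap);

        if (n > (int)sizeof buf - 1)
            n = (int)sizeof buf - 1;      /* output truncated to the buffer */
        for (int i = 0; i < n; i++)
            dev->put(dev->ctx, buf[i]);   /* route characters to whatever device */
    }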
It’s a feature to rewrite your OS kernel on the fly.
It's 1990, maybe 1999, in embedded land.
While "newlib" is an interesting idea - the approach taken here is, in many cases, the wrong one.
You see, the printf() family of functions don't actually require _any_ metal, bare or otherwise, beyond the ability to print individual characters.
For this reason, a popular approach for the case of not having a full-fledged standard library is to have a fully cross-platform implementation of the family which "exposes" a symbol dependency on a character printing function, e.g.:
    void putchar_(char c);

and variants of the printf functions which take the character-printing function as a runtime parameter:

    int fctprintf(void (*out)(char c, void* extra_arg), void* extra_arg, const char* format, ...);
    int vfctprintf(void (*out)(char c, void* extra_arg), void* extra_arg, const char* format, va_list arg);

this is the approach taken in the standalone printf implementation I maintain, originally by Marco Paland: https://github.com/eyalroz/printf
honestly love reading about this stuff - it always makes me realize how much gets glossed over in school. do you think modern CPUs and all the abstraction layers help, or do they just make things messier for folks trying to learn the real basics?
I always felt with these kinds of things you strip out `stdio.h` and your new API/ABI/blackbox becomes `syscall` for `write()`, etc.
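On a hosted Linux target that boils down to something like the sketch below: skip stdio and issue write(2) yourself (x86_64, where the write syscall number is 1; raw_write is just an illustrative name):

    #include <stddef.h>

    static long raw_write(int fd, const void *buf, size_t len)
    {
        long ret;
        /* x86_64 Linux: syscall number in rax (1 = write), args in rdi/rsi/rdx;
           the kernel clobbers rcx and r11. */
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1), "D"((long)fd), "S"(buf), "d"(len)
                          : "rcx", "r11", "memory");
        return ret;
    }

    int main(void)
    {
        raw_write(1, "hello from a raw write()\n", 25);
        return 0;
    }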
I am coding RISC-V assembly (which I run on x86_64 with a mini-interpreter), but I am careful to avoid the use of the pseudo-instructions and the register aliases (no compressed instructions, of course). I have a little tool to generate constant-loading code as a one-liner (semicolon-separated instructions).
And as a pre-processor I use a simple C preprocessor (I don't want to tie the code to the pre-processor of a specific assembler): I did that for x86_64 assembly, and I could assemble with gas, nasm and fasmng (fasm2) transparently.
What's wrong with compressed instructions?
I was very confused by the title, expected someone writing their own printf — i.e. the part that parses the format string, grabs varargs, converts numbers, lines up strings, etc.
I'd have called it "Bare metal puts()" or "Bare metal write()" or something along those lines instead.
(FWIW, FreeBSD's printf() is quite easy to pluck out of its surrounding libc infrastructure and adapt/customize.)
FreeBSD’s printf is my go-to, too! It’s indeed enormously simple to pluck out, instantly gives you a full-featured printf, and has added features such as dumping memory as hex.
Funnily enough we're not even referring to the same one, the hexdump thing is in FreeBSD's kernel printf, I was looking at the userspace one :). Haven't looked at the kernel one myself but nice to hear it's also well-engineered.
(The problem with '%D' hexdumps is that it breaks compiler format checking… and also 'D' is a length modifier for _Decimal64 starting in ISO C23… that's why our hexdump is hooked in as '%.*pHX' instead [which still gives a warning because %p is not supposed to have a precision, but at least it's not entirely broken.])
Is it? Could you elaborate/provide links to examples of this?
What customization would it support? Say, compared to these options:
https://github.com/eyalroz/printf?tab=readme-ov-file#cmake-o...
What's the state of the art for cross-compiling in $CURRENTYEAR?
I just run Gentoo and follow their cross-compile guide, as well as set up distcc; but that's for system packages.
TBH I'd use QEMU if I had to make something work for arbitrary code.
https://wiki.gentoo.org/wiki/Crossdev
But there are others.
Well, I know it's pretty easy in Debian. (It's not completely pain-free if you need unpackaged third-party libraries and/or if you are cross-compiling from one uncommon architecture to another.)
Honestly, probably zig cc.
> What's the state of the art for cross-compiling in $CURRENTYEAR?
Poopy garbage dog poop.
glibc is a dumpster fire of bad design. If you want to cross-compile for an arbitrarily old version of glibc then... good luck. It can be done. But it's nightmare fuel.
What sort of roadblocks do you run into? Just to get an idea for the flavor of problems.
Well, there are two flavors to this: building glibc, and building a program that links against glibc. They're not entirely the same. And you'd think the latter is easier. But you'd be wrong!
It should be trivial to compile glibc with an arbitrary build system for any target Linux platform from any OS. For example, if I'm on Windows and I want to build a program that targets glibc 2.23 for Ubuntu on an x86_64 target, that should be easy peasy. It is not.
glibc should have ONE set of .c and .h files for the entire universe. There should be a small number of #define macros that users need to specify to build whatever weird ass flavor they need. These macros should be plainly defined in a single header file that anyone can look at.
But glibc is a pile of garbage and has generated files for every damn permutation in the universe. This is not necessary. It's a function of bad design. Code should NEVER EVER EVER have a ./configure step that generates files for the local platform. EVER.
Read this blog post to understand the mountains that Zig moved to enable cross-compilation. It's insane. https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
Is the cross-compilation story any better with musl?
gcc and glibc SDKs are hell beyond anything sane.
Like it is done on purpose.
You're getting downvoted but you're not wrong. Well, it's not on purpose per se. It's just really really bad design from the 80s when we didn't know better. And unfortunately its design hasn't been fixed. So we have to deal with almost 40 years of accumulated garbage.
Aren't there some ideologically driven decisions to make proprietary extensions harder/impossible?