Dynamic linking has taken a lot of flak from the Go team, and they present some nearly irrefutable points, yet I still feel it is a necessary solution in some cases. Perhaps dynamic linking as we’ve all come to know it isn’t the correct answer, but a newer, similar solution could be.
My proposal would be to design a style of dynamic linking that is based explicitly on functions’ signatures rather than just symbol names, much as static linking effectively enforces at compile time. If a black box is implemented to guarantee a certain output for a given input, then it should be fair to hold whoever implements a library responsible for incompatible version changes. I would also like to see programmers take more care in ensuring the same library can be linked statically as easily as dynamically, but that’s a different topic.
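To make the idea concrete, here is a minimal sketch of signature-based resolution in Go. The `registry`, `Export`, and `Resolve` names are hypothetical, invented for illustration: the "loader" keys each exported function on a hash of its name plus its full reflected type, so a caller compiled against an incompatible signature simply fails to resolve, instead of calling through a stale symbol.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"reflect"
)

// registry stands in for a dynamic loader's symbol table.
type registry map[string]interface{}

// key combines the symbol name with its reflected type string
// (e.g. "Greet/func(string) string") and hashes the result.
func key(name string, fn interface{}) string {
	sum := sha256.Sum256([]byte(name + "/" + reflect.TypeOf(fn).String()))
	return hex.EncodeToString(sum[:8])
}

// Export registers fn under its signature-qualified key.
func (r registry) Export(name string, fn interface{}) {
	r[key(name, fn)] = fn
}

// Resolve succeeds only if a function with the same name AND signature
// was exported; a changed parameter or result type yields a different key.
func (r registry) Resolve(name string, proto interface{}) (interface{}, bool) {
	fn, ok := r[key(name, proto)]
	return fn, ok
}

func main() {
	lib := registry{}
	lib.Export("Greet", func(who string) string { return "hello, " + who })

	// A caller expecting func(string) string resolves successfully.
	if fn, ok := lib.Resolve("Greet", (func(string) string)(nil)); ok {
		fmt.Println(fn.(func(string) string)("world"))
	}

	// A caller compiled against an incompatible func(string, int) string
	// is rejected at "link" time rather than misbehaving at a call site.
	_, ok := lib.Resolve("Greet", (func(string, int) string)(nil))
	fmt.Println("incompatible version resolves:", ok)
}
```

A real implementation would hash a canonical encoding of the type rather than its printed form, but the principle is the same: the signature is part of the symbol.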
Security: Take Go’s fastcgi package, for example. If you’re running a standalone instance built against the fastcgi library and a vulnerability is found in that library, you will be required to upgrade every single package that uses it. To programmers who crave a world rid of shared libraries, that might not be a big deal, but imagine it from an intermediate sysadmin’s perspective. Depending on your operating system’s distribution, you will have to either download new binaries for every one of these programs, or else maintain sources for every single package you run in production and remember to recompile all of them whenever the library is updated. I agree that in a lot of cases dynamic linking is overkill: it doesn’t save enough disk space, and it can cost extra memory and longer startup times. But this is one case where I’d rather see the library maintained separately from the application.
Modular Applications: How would you go about writing something like the Apache Web Server or the Pidgin Internet Messenger? Both of these applications have a daunting supply of official and user-contributed plug-ins, made both possible and efficient by dynamic linking. The best alternative I can think of would be to rely heavily on IPC techniques, which seems less efficient… but what do I know.
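The IPC alternative can be sketched simply: the host runs each plug-in as a child process and speaks a line-oriented protocol over its stdin/stdout. In this sketch, `/bin/cat` stands in as an "echo plug-in" so the example is self-contained; the protocol and the `callPlugin` helper are assumptions, not any real plug-in API.

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// callPlugin starts the plug-in binary, writes one request line to its
// stdin, and reads one response line back from its stdout.
func callPlugin(path, request string) (string, error) {
	cmd := exec.Command(path)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return "", err
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	fmt.Fprintln(stdin, request)
	stdin.Close() // signal end of input so the plug-in can exit
	reply, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		return "", err
	}
	cmd.Wait()
	return strings.TrimRight(reply, "\n"), nil
}

func main() {
	// "cat" simply echoes the request back, playing the plug-in's role.
	reply, err := callPlugin("cat", "handle-message hello")
	if err != nil {
		fmt.Println("plugin error:", err)
		return
	}
	fmt.Println("plugin replied:", reply)
}
```

The per-message cost of crossing a process boundary is exactly the inefficiency worried about above, though it buys fault isolation that in-process plug-ins lack.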
The Runtime: I think this should be optional at compile time. For example, Debian does a wonderful job of maintaining consistent dependencies on glibc without breaking things, so why should that privilege be revoked? Why would we want tons of 1 to 3 MB binaries for trivial operating system tools (echo, cat)? On the other hand, statically linking the runtime (which I don’t see an obvious means of doing in C++) does offer a great advantage for binary portability. That provides a fantastic edge for proprietary software vendors.
In my brief and naive research, I got the impression that one of the drawbacks of dynamic linking is that the entire library is loaded into memory, whereas with static linking only the required components are. The same article notes that a shared library used by many applications can be loaded into memory just once. While this still implies that the entire library is resident, it could save memory overall. However, I wonder whether this is only made possible by the fact that glibc is always loaded dynamically, and is therefore able to coordinate that particular sharing of memory, or perhaps I’ve misinterpreted something altogether.
On the other hand, how necessary is it that we save a megabyte of RAM (or even disk space) in this day and age? This topic is a growing internal conflict of mine, but I still believe that making software as resource-efficient as possible should be a much higher priority than it is. Any serious implementation should consider the possibility of running on small, low-power, low-memory embedded devices. And while flash memory is inexpensive these days, there’s still a pretty active scene for modifying consumer wireless routers, where that resource remains scarce.
All in all, I already see Go as a fantastic opportunity for developers who want to write powerful and flexible applications on a smaller and more efficient development and runtime stack.