The excellent article by Drew Crawford prompted me to add a comment, since performance has always been one of my hobbies. Performance is what drives a lot of programmers: how to optimise code to be small or efficient. (Nowadays there is a third axis to this - how to be power-efficient.)
When coding, one starts with simple code and algorithms. The code is blindingly fast because a lot of the bureaucratic "noise" is omitted. This "noise" is what is responsible for error checking and validation, and it creeps into code over time to address issues - e.g. buffer overflows and race conditions.
For small pieces of code or algorithms, it is possible to be close to
optimal. E.g. sorting is a classic standard algorithm - people rarely
implement their own, but use the classes in their language of choice to do
this. Sorting is complicated: you optimise either for the small-set
scenario or the large-set one, and the differing algorithms have different
best/worst/average case performance and/or memory use.
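This small-set/large-set trade-off is exactly why production sort routines are hybrids. A minimal sketch in Python (the threshold of 16 is a hypothetical tuning value, not anything from a real library): merge sort for the O(n log n) worst case, switching to insertion sort on small runs where its low constant factors win.

```python
def insertion_sort(a, lo, hi):
    # O(n^2) in general, but very fast on tiny runs: no recursion,
    # low constant factors, good cache behaviour.
    for i in range(lo + 1, hi):
        v = a[i]
        j = i - 1
        while j >= lo and a[j] > v:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = v

def hybrid_sort(a, lo=0, hi=None, threshold=16):
    # Sort a[lo:hi] in place. Below the (hypothetical) threshold,
    # insertion sort beats the overhead of recursion and merging.
    if hi is None:
        hi = len(a)
    if hi - lo <= threshold:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_sort(a, lo, mid, threshold)
    hybrid_sort(a, mid, hi, threshold)
    # Merge the two sorted halves.
    merged = []
    i, j = lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid])
    merged.extend(a[j:hi])
    a[lo:hi] = merged
```

Real implementations (CPython's Timsort, for instance) take the same idea much further, but the shape of the trade-off is the same.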
As a simple example of this, consider 256-bit x 256-bit multiplication. 256 bits
is longer than any normal processor word size, so you could just implement
multiplication as a series of shifts and adds, or you could just create
a massive 2^256-squared table and index directly into it to get
O(1) performance - at a cost of memory which is effectively infinite: 2^256
is about 10^77, within a few orders of magnitude of the number of atoms
in the observable universe (roughly 10^80).
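The shift-and-add approach mentioned above can be sketched in a few lines. Python's integers are already arbitrary-precision, so this is purely illustrative of what fixed-word hardware has to do bit by bit:

```python
def mul256_shift_add(a, b):
    # Schoolbook shift-and-add: walk the bits of b; wherever a bit is
    # set, add a copy of a shifted into that position. For 256-bit
    # operands this is at most 256 iterations, producing a 512-bit
    # product - versus one (impossible) table lookup for O(1).
    result = 0
    shift = 0
    while b:
        if b & 1:
            result += a << shift
        b >>= 1
        shift += 1
    return result
```

In hardware or C, the same idea runs over machine-word "limbs" rather than single bits, but the time/space trade-off against the lookup table is the same.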
Given a Turing-complete language, one can do anything with it (implement an x86 emulator, a Linux emulator, an MSDOS emulator, or a full-blown graphics package - all have been done).
Yes, it can be done. In each case, one has to ask "why?"
Drew's article makes excellent comparisons about memory use and why
you will do whatever it takes. With millions of programmers, the boundaries
have yet to be found, but the cost is enormous - our battery life, the
power used by our laptops, and the fact that 8GB RAM is really a minimum
when running a desktop browser.
Building HTML- and JavaScript-based web pages is very powerful, and easy to change when you change your
mind. Doing the same in C#, Java, or C++ is painful - assuming you
have a stable graphics library. (FYI, CRiSP eschews the standard
visual class libraries and implements its own; it had to, since none of these
existed way back when CRiSP was being born. This, in turn, has led to CRiSP
being available across platforms and able to work anywhere - there is no
requirement to install package X to run it. On the other hand,
this adherence to basic principles comes with an implementation price,
and CRiSP needs surgery - it actually needs to use more memory and CPU
to compete with its competitors. CRiSP still fits into the instruction cache of
most CPUs and is fast in many, many areas.)
CRiSP's "speed" is impressive (hey, this is an advert, after all!).
But there are trade-offs. There are things which CRiSP will perform badly
at (if anyone is interested, I can demonstrate some of them). These
deficiencies are similar to those of many other apps. For example, in Perl it is
easier to write regexp matches than direct string comparisons. That's simply horrible -
the Perl interpreter has to work overtime to optimise the regexp-that-isn't
cases (and it does a good job). But this "costs".
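The Perl point generalises. Here is a rough Python analogue (the word list and counts are a hypothetical workload, not a benchmark of Perl itself), comparing a direct string comparison against an equivalent literal regexp - the engine machinery costs something even when the pattern contains no metacharacters:

```python
import re
import timeit

# Hypothetical workload: count exact occurrences of one word.
words = ["alpha", "beta", "gamma", "delta"] * 1000
target = "gamma"

def direct():
    # Plain equality: a single string compare per word.
    return sum(1 for w in words if w == target)

pat = re.compile(r"gamma")  # a "regexp-that-isn't": just a literal

def regexp():
    # Same answer, but routed through the regexp engine.
    return sum(1 for w in words if pat.fullmatch(w))

t_direct = timeit.timeit(direct, number=20)
t_regexp = timeit.timeit(regexp, number=20)
# On CPython the regexp path is typically the slower of the two,
# despite the engine's literal-pattern optimisations.
```

Both return the same count; the difference is pure overhead, which is exactly the "costs" above.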
CRiSP does software virtual memory - allowing huge files to be
edited without needing memory proportional to the size of the file.
For small files you don't notice this; for large files, the I/O overhead
exceeds your ability to notice what is going on. But some pathological
cases will show what is happening.
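This is not how CRiSP itself is implemented (I have no visibility into that), but the general technique - demand-paged file access with a bounded cache - can be sketched like this; the page size and cache limit are arbitrary illustrative values:

```python
from collections import OrderedDict

class PagedFile:
    """Demand-paged read access to a file: memory use is bounded by
    max_pages, not by the file size - a toy sketch of editor-style
    software virtual memory."""
    PAGE = 4096

    def __init__(self, path, max_pages=8):
        self.f = open(path, "rb")
        self.max_pages = max_pages
        self.cache = OrderedDict()  # page number -> bytes, LRU order

    def _page(self, n):
        # Return page n, loading it from disk on a cache miss and
        # evicting the least-recently-used page when over budget.
        if n in self.cache:
            self.cache.move_to_end(n)       # mark most recently used
            return self.cache[n]
        self.f.seek(n * self.PAGE)
        data = self.f.read(self.PAGE)
        self.cache[n] = data
        if len(self.cache) > self.max_pages:
            self.cache.popitem(last=False)  # evict LRU page
        return data

    def read(self, offset, length):
        # Read an arbitrary byte range, page by page.
        out = bytearray()
        while length > 0:
            n, off = divmod(offset, self.PAGE)
            chunk = self._page(n)[off:off + length]
            if not chunk:
                break                       # past end of file
            out += chunk
            offset += len(chunk)
            length -= len(chunk)
        return bytes(out)
```

The pathological cases mentioned above fall out naturally: access patterns that hop between more distinct pages than the cache holds will thrash, and that is exactly how you can probe an editor's paging behaviour from the outside.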
As an experiment, I test out many competing text editors to see
"how they are implemented", and it's not difficult to determine how,
by suitable probing and pathological tests. Most software evolves over
time so that it is doing things the original authors never intended.
(Consider Excel - a complaint for many years was its inability to edit
files with more than 64k rows. Why would anyone do that?! But they do,
because this data comes from a database or other source; many people have
purchased CRiSP exactly because of Excel's limitation.)
It is admirable that mobiles can run in 512MB-2GB of RAM, but most
of us know the annoying design decisions which prevent us, the users,
from doing a better job of optimising our working set than the generic
algorithms in iOS or Android (which in turn do a good job, but never
quite good enough). Most of the time this doesn't matter. And in 2013,
the minimum-spec mobile is impressive. If one wrote a web app
two years ago, the hardware has moved on - what was unlikely a while ago
is now "the norm". (This is no comfort for certain websites which
consume huge amounts of bandwidth or waste the user's time with
a huge graphics-oriented page containing one or two paragraphs
of text followed by a "Next" button to take you to another equally
obnoxious page with no information content.)
Some sites, like www.theregister.co.uk, are good - the information
density is palatable. Slashdot, after many false starts, has a better
mobile site than the desktop version (no server-side fetches to read
the story and comments, and the annoying sideways-scrolling problem has
disappeared). Hm. I just tried http://m.slashdot.org, but that fails
on my desktop browser.