In the beginning was the computer - of various shapes and sizes.
They got faster and more powerful, and many languages and scripting
languages evolved.
There are lots of interesting problems you can set yourself as a
developer - experimenting with algorithms and graphics.
I come from a time when 4K of memory was large enough (the
early Z80 micros), and 256K on the bigger minis was huge.
I am always perplexed when staring at my browser - whether it's
slashdot, engadget or twitter. The algorithmic part of these
sites and programs is mostly irrelevant - pretty visuals and icons
along with Web 2.0 style auto updates.
A long time ago, it was easy to write your own browser from scratch -
write an HTML parser, break the text up into bold sections
and hyperlinks. That's a great way to learn to program, by the way.
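As a rough sketch of what that learning exercise might look like - a toy Python parser (my own example, not anything a real browser does) that pulls the bold runs and hyperlinks out of a snippet of HTML using only the standard library:

    # Toy illustration: extract bold text and hyperlinks from HTML
    # using only Python's standard library. Real browsers do vastly more.
    from html.parser import HTMLParser

    class ToyExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_bold = False
            self.bold_runs = []   # text found inside <b>...</b>
            self.links = []       # href values from <a> tags

        def handle_starttag(self, tag, attrs):
            if tag == "b":
                self.in_bold = True
            elif tag == "a":
                for name, value in attrs:
                    if name == "href":
                        self.links.append(value)

        def handle_endtag(self, tag):
            if tag == "b":
                self.in_bold = False

        def handle_data(self, data):
            if self.in_bold and data.strip():
                self.bold_runs.append(data.strip())

    parser = ToyExtractor()
    parser.feed('<p><b>Hello</b> <a href="http://example.com">world</a></p>')
    print(parser.bold_runs)   # ['Hello']
    print(parser.links)       # ['http://example.com']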
On real web sites, the HTML is not clean and can suffer
from unbalanced markup. A real web browser has to decide how to handle
the inconsistent and illogical real-world examples of non-conforming HTML.
And the XHTML and W3C initiatives to define "correct" HTML were abandoned.
Assuming you got far enough to render reasonable examples of HTML, you
have covered maybe less than 0.5% of a real web browser.
CSS and Javascript, multiple tabs, non-blocking APIs, network
connections, caching, and animated visuals - each of these is
a large project in itself. In fact, for CSS + Javascript, everyone
has abandoned their own implementations and pretty much settled
on WebKit (Safari, Opera, Android - although Windows and Firefox hold
out with their own implementations; apologies to Firefox if I got that
wrong).
All the browsers are reasonably huge as binary downloads, and are
astonishingly huge (and impressive) as source code downloads. Few
people attempt to compile a browser from source.
What this demonstrates is the power of OO coding and class libraries which
do well-defined things. It's a long time since I looked at the code
of a browser - there is a lot of good stuff in there, but it's so
huge that few people can begin to understand much more than a handful
of disconnected methods.
It's a bit like going from a mud hut to a skyscraper in terms of
technical achievement - such that now, nobody tries to build
a brand new skyscraper - they just take the existing model of a skyscraper
and apply small changes to do something new.
As I write this, I am staring at my twitter page in one of my many
tabs in the browser, realising that the layer upon layer of stuff to
make that page happen uses ever more resources to do it - even a
high-powered machine with a lot of RAM struggles to make the experience
responsive. (Twitter said there were 1200+ new tweets, and an attempt
to load them made Firefox time out and suggest the Javascript on the
page was unresponsive.)
Many code optimisations in a mature program may lead to tens of percent
performance increases, and CPU speeds are only going up by 10-50%
per year, yet a small piece of Javascript code can use up tens of thousands
of percent more resources - so it's a failed arms race for CPUs, compilers
or web browsers to get ever faster. The nature of the
entire stack of software development is to effectively stop thinking
about low-level optimisations and let people do things like load
thousands of tweets into a page (and this is not twitter's fault). I
am guilty of writing HTML pages with a couple of hundred thousand rows
in the table (this is useful when doing initial analytics - see how bad
the problem is, before deciding how to avoid displaying 200k rows
on a single web page). [The solution is a REST interface or a CGI-type script on
the server to allow pagination of results - but that is ugly
for large data sets.]
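As a rough sketch of that pagination idea (my own toy Python, with made-up names like paginate and /results, not code from any real project), the server-side logic can be as simple as slicing the result set and handing back previous/next links:

    # Toy illustration: server-side pagination so the browser only ever
    # renders a few hundred rows of a huge result set at a time.
    from urllib.parse import urlencode

    def paginate(rows, page=1, per_page=500, base_url="/results"):
        """Return one page of rows plus previous/next links (or None)."""
        start = (page - 1) * per_page
        chunk = rows[start:start + per_page]
        prev_url = next_url = None
        if page > 1:
            prev_url = base_url + "?" + urlencode({"page": page - 1})
        if start + per_page < len(rows):
            next_url = base_url + "?" + urlencode({"page": page + 1})
        return chunk, prev_url, next_url

    # e.g. page 2 of a 200,000-row analytics dump, 500 rows at a time
    rows = [("row-%d" % i,) for i in range(200000)]
    chunk, prev_url, next_url = paginate(rows, page=2)
    print(len(chunk), prev_url, next_url)   # 500 /results?page=1 /results?page=3

In practice the slicing would be pushed into the database query rather than holding 200k rows in memory, which is where the ugliness for large data sets tends to show up.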
What we have is a situation today where the volume of data to
look at (and, in the case of facebook or twitter, it has your
attention for maybe 1/10th of a second) is huge, and most of the software
industry spends its effort trying many different ways to visualise
that data.
Post created by CRiSP v11.0.16a-b6565