Re: RAD vs. performance
Robbert Haarman wrote:
> On Wed, Jun 28, 2006 at 11:58:30AM +0100, Jon Harrop wrote:
> > If you want your code to work then you must take the differences between
> > ints and floats into account. This is a really simple concept in
> > programming. Why are floating-point loop variables taboo? What are the
> > semantic differences between "+" for ints and floats (hint: a+b+c=a+c+b for
> > ints but not for floats)? Why is it ok to compare ints for equality but not
> > floats? This is really basic stuff...
> Yes, but only because some, not even all, programming languages make it
> that way.
No, this is a fundamental limitation of computers. This has nothing to
do with programming languages per se.
> I'm not sure that if you step back and consider it from a
> distance, this is the way you want it to be.
Yes, I would love to have a computer that was infinitely powerful, but I
cannot have one, so I must put up with finite-precision arithmetic.
Consequently, I must be aware of the capabilities and limitations of
finite-precision arithmetic.
> I know that I prefer
> numbers to be just numbers, whether or not they are integers, positive,
> transcendental, or representable in 36 bits. If I don't have to care
> about the internal representation, so much the better.
You do have to care about the internal representation. That is my
point. Trying to brush it under the carpet by providing a consistent
interface is asking for trouble.
> To cut a long
> story short: there _should_ be a uniform interface to numbers.
> > Again, this is a practically important case (e.g. it appears in XML-RPC
> > bindings) that is solved in statically typed languages by writing an IDL
> > compiler. The compiled code would just raise a run-time type error
> > exception:
> But wouldn't you agree that writing an IDL compiler that generates code
> so that run-time type errors are signalled amounts to a workaround? In a
> dynamically typed language, you could just feed the above code through
> the compiler, and it would emit the code for checking that say_hi()
> applies to the object at run-time. In a statically typed language, the
> compiler would reject the program. So you work around that by writing
> another compiler that writes all the extra code for you, eventually
> ending up with what the dynamically typed language gave you to begin
> with.
No, it has the advantage that the rest of your program can be
statically type checked. If most of your code is not in the dynamically
typed interface, then that benefit is worth having; otherwise it is not.
> > Has anyone ever made a compiler for a dynamically typed language that gets
> > performance comparable to that of statically typed languages without having
> > massively longer compile times?
> I don't know. I think so. But does it really matter?
Of course it matters. I'm not about to ditch statically typed languages
with compilers that exist and work because someone thinks that it might
be possible to write a compiler for a dynamically typed language that
offers the same benefits. Especially when all the evidence indicates
that this is wrong.
> What's important
> isn't if anyone _has_, but if it's _possible_. I think it is,
I think it isn't even theoretically possible. Just look at the work
Stalin has to do to get decent run-time performance.
> > I don't think that is theoretically possible so I'll believe it when I see
> > it. How well does your system handle higher-order functions, for example?
> Again, if a statically typed language compiler can infer types, so can
> a dynamically typed language compiler.
You can't infer the information that is available to an ML compiler
without also requiring the same declarations (e.g. variants), in which
case you've lost the theoretical brevity of dynamic typing and you've
basically got a poor-man's static type system.
Any way you cut it, if you "develop" a dynamic type system to provide
the benefits of a static type system then you've reinvented the static
type system.