Cast not your Perl before swine?
levi at cold.org
Mon Dec 17 23:27:32 MST 2007
"Bryan Sant" <bryan.sant at gmail.com> writes:
> I'm not all the way through the report. But some things already smell funny.
You are clearly reading this from the wrong perspective. These guys
are supporters of garbage collection; they aim simply to measure what
its cost is. There's not a whole lot of quantitative research on the
subject, because it's a hard thing to measure. Pretty much all of the
research does conclude that there is a runtime cost, though. Most of
it also agrees that although the time cost is minimal, the space cost
of garbage collection can be significant. This paper also shows that
the time cost is dependent on giving the collector enough heap space
to work with.
> As far as I can tell, they are simulating a 2Ghz G4 PowerPC system on
> a 1Ghz PowerPC physical machine. What? Why? Also they are using
> ancient GC algorithms which aren't used by Java or .NET. They are
> running on the Jikes RVM (a research JVM written in Java that runs on
> an actual Java VM) so the cache hit rate is going to be affected.
> They aren't using a JIT compiler that could hoist small object values
> into registers.
So? That's largely irrelevant to the issue of garbage collection
vs. explicit memory management. They're presumably doing it that way
because it's the environment they're equipped to instrument to get
the measurements they need.
> Worst of all they didn't rewrite the test applications in an
> explicitly managed language for an honest comparison. They instead
> just recorded the times that the JVM collected the objects and
> implanted a call to "free" in their place -- how convenient! I would
> love to have those runtime calculated decisions known at compile time
> for a language like C. However, that's not possible unless you're
> mocking out a contrived test like these guys did. So this is
> completely unrealistic when writing a true explicitly managed
> application. In a real C app one would have to manually track the
> lifetime of a struct/array and know when it is safe to free. The most
> common case is doing much more frequent malloc/free combinations to
> prevent memory leaks or dangling pointers, etc.
The locations where they simulated calling free weren't calculated at
compile time, they were calculated at run time for a specific run of
the application. It's not a problem generally amenable to static
analysis, or we wouldn't need garbage collectors in the first place!
Anyway, they do state that the manual allocation simulation is an
optimal one, so there is some bias that way. However, doing it that
way is an interesting counterpoint to the typical route one uses to
measure GC effects, which is to add a conservative GC to a standard
C/C++ program and turn free() into a noop. I feel this way is a
better determination of the cost of GC in Java, since programs are
written in Java style rather than C/C++ style. Rewriting the programs
would add a lot of uncontrolled variables to the study.
> At any rate, I think these guys raise some interesting points about
> the theoretical advantages explicit memory management has, but in
> practice, it just isn't true. Essentially they are cheating by using
> a hybrid process that gets the best of both worlds. In most cases,
> persons writing software in C/C++ will end up sacrificing more time in
> malloc/free calls, will have less contiguous heap space and thus will
> hit the L2 cache less often, and will rarely know when to push their
> objects into registers when possible compared to a GC'd language.
Again, you're misinterpreting their intent. It's not in question that
decent explicit memory management nearly always has better time/space
performance than garbage collection. Modern GC alleviates some of the
issues, but often creates new ones. It's still a bit of a black art,
especially when you have to deal with interactions with CPU
characteristics and OS paging systems. Their goal was to understand
the nature of the costs of GC, and I think their approach was sound
and interesting. Hopefully it will direct further research into
improving GC. Some of their other research involved getting the OS
into the picture to alleviate the poor interaction with paging.
Not everyone is out to bash Java and garbage collection. These guys
certainly weren't, and you don't need to feel compelled to get all
defensive with them as you typically do with Java-bashing PLUGgers. :)
> However, I will agree that the bit about hitting the OS swap partition
> is absolutely true. If your language uses a GC and you hit the swap
> file, it is absolute death to performance. GC has costs, but those
> costs are paid for (and then some) by having contiguous memory that
> will likely be found in L2 cache when needed. If you can't find your
> objects in cache that hurts... If you can't find them in main memory
> at all... That REALLY hurts because the GC visits the entire heap
> space on a full collection, thus you have to read ALL of your heap
> back into main memory if you ever swap.
The benefits of having data in contiguous memory are typically offset
by the fact that doing liveness analysis tends to pull in stuff from
all over the heap, blowing the cache out quite effectively. This is
especially painful when you have to pull in pages from swap just to
check their pointers.
> Region-based memory management is awesome. That is basically what
> these guys were mimicking with this test (well, actually even better
> because they didn't have to do any object tracking whatsoever). Do
> you use region based techniques in your embedded code?
I haven't, but Cyclone, a C variant for embedded systems, uses
region-based memory management for some memory. It's also got a lot
of other very nifty features. I might use it in the future, but I
haven't had occasion to yet.