
Re: [xmlblaster-devel] ldbc

Hello Peter,
great that it works so nicely with ldbc.

Peter Bennett wrote:

I have run up ldbc with hsqldb and postgres. MySQL next.

There are significant changes required to allow ldbc
to be used.

Minor SQL syntax changes and table changes...

ldbc does not support cascading yet so I chopped that out.
Modified the delete routines to delete the right stuff.

I first fired it up with hsqldb and it worked fine after
some SQL changes to comply with ldbc; then I changed over
to postgres and ran across an overflow error...

The uniqueId for the XB_ENTRIES dataId failed the
range checks in ldbc for values >= 10^18.

That's a well-known problem of the 1.7.2 release. The hsqldb people have fixed it, but the fix is not released yet; that's why the hsqldb.jar in $XMLBLASTER_HOME/lib is built from CVS. Could it be that you have another hsqldb.jar somewhere in your classpath which is then used internally by ldbc?

I hacked the uniqueID generation by dividing the number by
100... This removes a couple of zeros.
(MsgQueueEntry.java, MsgQueueHistoryEntry.java and TopicEntry.java)

Hmm, I would not do that: it makes the IDs non-unique.

and it worked...

It is pure coincidence that it worked (or perhaps a slow machine).

I will try to figure out why it works with postgres
using its own JDBC driver but not with ldbc.

Anyway I think that ldbc looks very promising.

Apart from the changes needed to make xmlBlaster comply,
it integrates seamlessly...

I.e. change the jdbc driver list to contain only org.ldbc.jdbc.jdbcDriver
(ldbc loads the real jdbc driver itself), modify the db url to read jdbc:ldbc:hsqldb:etc and set the mappings up...
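
For reference, the switch described above might look like this in xmlblaster.properties. The property keys here are assumptions from my setup, not necessarily the exact names your xmlBlaster version uses; check your own xmlblaster.properties:

```properties
# ldbc's driver wraps the real JDBC driver (ldbc loads it itself).
# NOTE: the property names below are illustrative, check your
# xmlblaster.properties for the exact keys.
JdbcDriver.drivers=org.ldbc.jdbc.jdbcDriver

# Prefix the database URL with jdbc:ldbc:
queue.persistent.url=jdbc:ldbc:hsqldb:...

# Trace level for debugging: off, on or detailed
ldbc.trace=off
```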


Set ldbc.trace to off, on, or detailed for debugging.


ldbc does not accept VARCHAR without a length specifier, e.g. VARCHAR(128). A blobVarName mapping is required. There are alternative mappings, as ldbc maps a whole lot to its own types, which then get mapped to the database's types.

Like MySQL, ldbc does not like the reserved word blob being
used as a column name.
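
To illustrate both points, a sketch in SQL. The table and column names are made up for the example, not xmlBlaster's actual schema:

```sql
-- Illustrative only; not xmlBlaster's actual schema.

-- Rejected by ldbc: VARCHAR without a length, and "blob" as a column name
-- CREATE TABLE XB_EXAMPLE (attr VARCHAR, blob LONGVARBINARY);

-- Accepted: explicit VARCHAR length, and the column renamed so the
-- reserved word "blob" is avoided
CREATE TABLE XB_EXAMPLE (
  attr     VARCHAR(128),
  blobval  LONGVARBINARY
);
```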

If the overflow problem can be straightened out without having
to change the size of the generated id, then it could be possible
to supply an ldbcManagerCommonTable plugin.

This would obviously obsolete the old way and the recent MySQL-specific
work, and open up support to every database on the list ldbc supports.

A bit of background on why the uniqueId is so big would be nice.
On my system it generates 19 digits, I think.

That's a unique timestamp. It is used whenever there is a need for a unique (per JVM) identifier. The current timestamp forms the more significant digits (12 digits), and a counter of up to one million is appended as the less significant digits (6 digits). All together you have 18 digits, making it possible to create one million new identities every millisecond. The 19th digit is the minus sign, which is used in queues where an inverse sequence is needed.
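
The scheme can be sketched like this in Java. This is only an illustration of the idea described above (millisecond clock times one million, plus a per-millisecond counter), not the actual code in xmlBlaster:

```java
// Illustrative sketch of a timestamp-based unique ID as described above;
// NOT the actual xmlBlaster implementation.
public class UniqueId {
    private static long lastMillis = -1;
    private static long counter = 0; // up to 1,000,000 IDs per millisecond

    // The millisecond clock supplies the more significant digits, a
    // per-millisecond counter the 6 less significant ones; the result
    // still fits in a signed 64-bit long (max ~9.2 * 10^18).
    public static synchronized long next() {
        long now = System.currentTimeMillis();
        if (now != lastMillis) {
            lastMillis = now;
            counter = 0;
        }
        return now * 1000000L + counter++;
    }
}
```

Two IDs created within the same millisecond differ only in the six least significant digits, which is exactly why chopping digits off the low end can make them collide.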

Cutting off the less significant digits does not solve the problem: two IDs created in the same millisecond would then collide.


Saluti Michele