Max number of files in a collection

José María Fernández González
Hi everybody,
        I have been doing some tests with my huge test documents (now they
take about 8 GB), splitting them into smaller pieces. The problem now is
the number of generated "sub"documents, which is about 1 million. I have
discovered that they can't all be stored in a single collection
because an "Out of Memory" exception is thrown (at around 250,000~260,000
documents in the same collection) with the Java parameter -Xmx256000k.
If I double it (-Xmx512000k), then I get past 520,000 documents.
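
        (Those figures work out to roughly 1 KB of heap per stored document:
256 MB for ~250,000 documents and 512 MB for ~520,000. As a generic,
eXist-independent sanity check, you can print the heap limit the JVM
actually received:)

    // Generic sketch, not eXist-specific: verify the effective -Xmx.
    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
        }
    }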

        I'm using an Athlon 64 with 2 GB of memory, Gentoo Linux, and I have
tested with various JVMs: Sun (1.5.0_04) and Blackdown (1.4.2.03); BEA's
JRockit (1.5.0_03) simply crashed due to an internal bug after a few tens
of thousands of documents; and in the end I'm using the IBM one (1.4.2),
because it seems faster than the others, and it got further than
they did.

        I have been using both the XML-RPC interface and the local one for
the tests. So, my questions: is the maximum number of documents in a single
collection bounded by the Java heap? Is the list of documents in the
collections being used held (or pinned) in memory?
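
        (For reference, a minimal sketch of how the storing loop looks
through the XML:DB API; the collection path, document name, and content
are illustrative, and only the URI prefix changes between the local
driver and XML-RPC:)

    import org.xmldb.api.DatabaseManager;
    import org.xmldb.api.base.Collection;
    import org.xmldb.api.base.Database;
    import org.xmldb.api.modules.XMLResource;

    public class StoreDocs {
        public static void main(String[] args) throws Exception {
            // Register the eXist driver with the XML:DB API.
            Database db = (Database)
                Class.forName("org.exist.xmldb.DatabaseImpl").newInstance();
            db.setProperty("create-database", "true"); // start embedded instance
            DatabaseManager.registerDatabase(db);

            // Local (embedded) URI; for XML-RPC it would be something like
            // xmldb:exist://localhost:8080/exist/xmlrpc/db/test instead.
            Collection col =
                DatabaseManager.getCollection("xmldb:exist:///db/test");

            // The real test loops over ~1 million generated files.
            XMLResource res =
                (XMLResource) col.createResource("doc000001.xml", "XMLResource");
            res.setContent("<entry id=\"1\">...</entry>");
            col.storeResource(res);
            col.close();
        }
    }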

        Best Regards,
                José María

PS: I'm now testing with MORE memory (-Xmx1536m). I'll report my
results when it has finished...
--
José María Fernández González           e-mail: [hidden email]
Tlfn:   (+34) 91 585 54 50              Fax:    (+34) 91 585 45 06
Grupo de Diseño de Proteinas            Protein Design Group
Centro Nacional de Biotecnología        National Center of Biotechnology
C.P.: 28049                             Zip Code: 28049
C/. Darwin nº 3 (Campus Cantoblanco, U. Autónoma), Madrid (Spain)



Re: Max number of files in a collection

webhiker@tiscali.fr
Hi Jose,

I've been pretty frustrated as well with the lack of database support
for anything but Mickey Mouse-sized repositories in the Java world.
While I cut my teeth on Xindice, and loved the automatic indexing and
speed of eXist, I found you will never get beyond about 80K documents of
average size in a database.

So I've been hard at work developing an alternative which has just seen
its first release: Nxqd.
It's a client-server version of the excellent Sleepycat native XML
database, which is itself built on Berkeley DB.
The server is built as an Apache module, so no problems with scalability
there, and the client is pure Java.

I have fundamental XML:DB support, XPath, XQuery and all those good
things, and will soon have blob support etc.

You might want to check it out :

http://nxqd.sourceforge.net

We are looking for development help on the project, so feel free to get
involved.

wh


Re: Max number of files in a collection

wolfgangmm
In reply to this post by José María Fernández González
Hi,

You didn't tell us which eXist version you are using.

While it is true that the beta2 release had quite a few problems with
storing huge data sets, I would expect the current snapshot releases
to behave quite a bit better (though I always see more potential for
improvement). Recent code should scale well beyond the 30,000 docs
mentioned on the nxqd website ;-) provided that not too much swapping
occurs.

We also have some more pending changes in CVS to speed up indexing and
improve caching. In particular, I've tried to further reduce the
amount of temporary memory allocated during indexing. Given these
changes, I only see one remaining issue that could lead to
out-of-memory errors: the document metadata is still kept in memory
during indexing and - though a single document doesn't consume much
space - the sum of it could indeed become a problem for collections of
your size, so this may need to be addressed.

I acknowledge there are still many areas we have to work on to improve
general usability. During the past months, I put most of my effort into
sponsored features like the new logging & recovery, so performance work
fell a bit short (yes, even in an open source project, the selection of
features to implement is mainly driven by economic factors, as we all
have to earn our living).

From my point of view, the next crucial feature for improving support
for large data sets is a pre-evaluation optimizer for queries: as I see
it, even a simple query-rewriting optimizer could solve a major part of
the performance and memory problems I currently see with queries over
huge node sets, so this should be the next top priority for the
project.
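
(As a purely hypothetical illustration of the kind of rewrite meant
here, using an invented schema: the two queries below are equivalent,
but today only the second form lets eXist answer the predicate from an
index instead of materializing every entry first; a rewriting optimizer
would turn the first into the second automatically:)

    (: Naive form: builds the full sequence, then filters it. :)
    for $e in collection('/db/test')//entry
    where $e/author = 'Smith'
    return $e/title

    (: Index-friendly form: the predicate can be resolved by an
       index lookup before any entry is loaded. :)
    collection('/db/test')//entry[author = 'Smith']/title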

Wolfgang



Re: Max number of files in a collection

Adam Retter-7
In reply to this post by José María Fernández González

Have you done much performance testing with eXist, e.g.
hardware/software combinations and JVMs?
I see you mention trying several different versions of Java; I would be
interested to know anything further about hardware/software selection
for optimal eXist performance...
Also, are you using Tomcat as a container, and if so, which version?


Cheers Adam.


Re: Max number of files in a collection

José María Fernández González
In reply to this post by wolfgangmm
Hi Wolfgang,

Wolfgang Meier wrote:
> Hi,
>
> You didn't tell us which eXist version you are using.
>

I'm using the latest snapshot (20050805).

> While it is true that the beta2 release had quite a few problems with
> storing huge data sets, I would expect the current snapshot releases
> to behave quite a bit better (though I always see more potential for
> improvement). Recent code should scale well beyond the 30,000 docs
> mentioned on the nxqd website ;-) provided that not too much swapping
> occurs.
>

Well, I'm working in bioinformatics, and if we eventually try to insert
some of the most important data sources (EMBL or RefSeq, for instance),
we could generate tens of millions of documents!

> We also have some more pending changes in CVS to speed up indexing and
> improve caching. In particular, I've tried to further reduce the
> amount of temporary memory allocated during indexing. Given these
> changes, I only see one remaining issue that could lead to
> out-of-memory errors: the document metadata is still kept in memory
> during indexing and - though a single document doesn't consume much
> space - the sum of it could indeed become a problem for collections of
> your size, so this may need to be addressed.
>

Yes, it could be. So, is it worth testing/working with the current
version in eXist CVS?

>
> From my point of view, the next crucial feature for improving support
> for large data sets is a pre-evaluation optimizer for queries: as I
> see it, even a simple query-rewriting optimizer could solve a major
> part of the performance and memory problems I currently see with
> queries over huge node sets, so this should be the next top priority
> for the project.
>

        I guess so. I have been working with relational databases since
1996, and the query planner makes the difference between two RDBMSs
(PostgreSQL vs. MySQL), or even between two versions of the same RDBMS.

        Best Regards,
                José María

--
José María Fernández González           e-mail: [hidden email]
Tlfn:   (+34) 91 585 54 50              Fax:    (+34) 91 585 45 06
Grupo de Diseño de Proteinas            Protein Design Group
Centro Nacional de Biotecnología        National Center of Biotechnology
C.P.: 28049                             Zip Code: 28049
C/. Darwin nº 3 (Campus Cantoblanco, U. Autónoma), Madrid (Spain)



Re: Max number of files in a collection

José María Fernández González
In reply to this post by Adam Retter-7
Hi Adam,

Adam Retter wrote:
> Have you done much performance testing with eXist, e.g.
> hardware/software combinations and JVMs?

Well, I have tested eXist on quite different platforms over the last
year and a half; in chronological order:

* Sun Ultra60 (only one 360MHz processor) with 512MB and a 9GB SCSI disk.
* Powerbook G4 500MHz (first generation) with 512MB and 20GB slow IDE disk.
* Athlon 1.4GHz with 1GB and 500GB SCSI array (with IDE disks) in RAID-3.
* Acer Laptop with Intel Centrino 1.5GHz, 512MB and 80GB fast IDE disk.
* Athlon 64 3200+ with 2GB and 120GB SATA disk.

The IBM JVM is *very* fast on IBM's own processors (G4, POWER4, G5,
POWER5). The Sun JVM eats *lots* of memory on Solaris/SPARC (two or
three times what it uses on Intel platforms). Disk performance is
crucial when you are using the Sun JVM. I think eXist performance could
be improved if the transaction log file were located on a partition on
a different disk, because it is synced each time a transaction starts,
or finishes due to a rollback or commit. Also, if you have well-defined
queries, it is worth disabling the default full-text index over the
whole document and creating specific indexes (full-text or not) for the
elements/attributes/text you are querying.
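
(As a sketch only, since the syntax differs between snapshots: with
recent eXist versions this kind of tuning goes into a collection.xconf
configuration document, along these lines; the element names "entry"
and "description" are placeholders:)

    <collection xmlns="http://exist-db.org/collection-config/1.0">
        <index>
            <!-- Disable the default full-text index over everything... -->
            <fulltext default="none" attributes="no">
                <!-- ...and index only what is actually searched. -->
                <include path="//entry/description"/>
            </fulltext>
            <!-- Range index for exact-match lookups on an attribute. -->
            <create path="//entry/@id" type="xs:string"/>
        </index>
    </collection>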

And from the JVM point of view, what I have learnt is:

* In general, if you can choose between the 1.4.2 and 5.0 releases, the
first one is better (talking about eXist). Perhaps the latter will
improve in the future, but right now it tends to eat much more memory
and is a bit slower than 1.4.2.

* JRockit from BEA is not recommended, because version 1.4.2 is not
able to handle large files (bigger than 2 GB), and although 5.0 is able
to, it has some bugs. As I said in a previous mail, it had a
segmentation fault at the beginning of my tests (it crashed!).

* The Sun JVM is faster at I/O, but the IBM one wastes less memory.

> Also, are you using Tomcat as a container, and if so, which version?
>

No, I'm not using Tomcat; I'm running eXist in "server" mode.

        Best Regards,
                José María

--
José María Fernández González           e-mail: [hidden email]
Tlfn:   (+34) 91 585 54 50              Fax:    (+34) 91 585 45 06
Grupo de Diseño de Proteinas            Protein Design Group
Centro Nacional de Biotecnología        National Center of Biotechnology
C.P.: 28049                             Zip Code: 28049
C/. Darwin nº 3 (Campus Cantoblanco, U. Autónoma), Madrid (Spain)



Re: Max number of files in a collection

Adam Retter-7
In reply to this post by José María Fernández González
Great, thanks, that's really useful :-)

I'd specifically be interested to know whether you found the Sun JVM
faster on Sun hardware (and whether that was SPARC or AMD, and which OS
version) than on Intel hardware (and whether it was Windows or Linux,
and which version) in comparative tests for eXist.

Basically I need to purchase a server to run eXist on, and I am wondering
whether I should buy from Sun, Apple, IBM, or a Linux/Intel box.
I think I would prefer to stick to the Sun JVM, as this seems to be the
standard and so would be easier for support queries...

Cheers Adam.


Re: Max number of files in a collection

Tom Wern
In reply to this post by wolfgangmm

After looking through the mailing list regarding "large" document
collections, am I correct in concluding that a collection of 300,000 -
400,000 documents, each relatively small (<5K in size), is reasonably
manageable with eXist?

Thanks.

Tom  





Re: Max number of files in a collection

José María Fernández González
Hi Tom,
        well, if you have enough free memory, yes. I tried last week
inserting around 1,200,000 documents, and eXist ate about 1.6 GB of
memory! Each time I had to restart the server, it spent some minutes
reading the collection listing on first access. I tried some strategies
to reduce the memory usage, like spreading the files over
subcollections, but it didn't work. This behavior happens in all
running modes: local, server and startup.

        My workaround has been to create thousands of XML documents, each
one holding the content of thousands of the original ones.
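
        (A minimal sketch of that kind of merge, with an invented
directory layout and an invented <batch> wrapper element; written
against modern Java for brevity:)

    import java.io.IOException;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class MergeDocs {
        public static void main(String[] args) throws IOException {
            Path src = Paths.get("split-docs");
            Path dst = Files.createDirectories(Paths.get("merged-docs"));
            List<Path> parts;
            try (Stream<Path> s = Files.list(src)) {
                parts = s.sorted().collect(Collectors.toList());
            }
            int batch = 1000; // small files per merged document
            for (int i = 0; i < parts.size(); i += batch) {
                Path out = dst.resolve("batch" + (i / batch) + ".xml");
                try (Writer w = Files.newBufferedWriter(out, StandardCharsets.UTF_8)) {
                    w.write("<batch>\n");
                    for (Path p : parts.subList(i, Math.min(i + batch, parts.size()))) {
                        String xml = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
                        // Drop each fragment's XML declaration so it nests legally.
                        w.write(xml.replaceFirst("^<\\?xml[^>]*\\?>\\s*", ""));
                        w.write("\n");
                    }
                    w.write("</batch>\n");
                }
            }
        }
    }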

Tom Wern wrote:
>
> After looking through the mailing list regarding "large" document
> collections, am I correct in concluding that a collection of 300,000 -
> 400,000 documents, each relatively small (<5K in size), is reasonably
> manageable with eXist?
>

        Best Regards,
                José María

--
José María Fernández González           e-mail: [hidden email]
Tlfn:   (+34) 91 585 54 50              Fax:    (+34) 91 585 45 06
Grupo de Diseño de Proteinas            Protein Design Group
Centro Nacional de Biotecnología        National Center of Biotechnology
C.P.: 28049                             Zip Code: 28049
C/. Darwin nº 3 (Campus Cantoblanco, U. Autónoma), Madrid (Spain)



Re: Max number of files in a collection

Wolfgang Meier-2
> well, if you have enough free memory, yes. I tried last week
> inserting around 1,200,000 documents, and eXist ate about 1.6 GB of
> memory! Each time I had to restart the server, it spent some minutes
> reading the collection listing on first access. I tried some strategies
> to reduce the memory usage, like spreading the files over
> subcollections, but it didn't work. This behavior happens in all
> running modes: local, server and startup.

The CVS version should improve memory usage and avoid out-of-memory
errors during storage. There's still a small caching problem, though:
the cache for the collection data doesn't yet grow as it should, and
reading the collection metadata does indeed take too long with so many
files.

Wolfgang

