Hello everyone,
We are evaluating MonetDB right now to store and analyze a huge quantity of data.
We have several fact collections, ranging from 1 million rows and 200 columns
to well over 300 million rows and over 10,000 columns.
We are aware of the memory limitations of the current MonetDB version and are
looking forward to the new one, but in the meantime we'd like to run some
performance and load tests.
In order to know which data we will be able to store, we need to know how much
memory a BAT will use depending on the datatype and the number of elements we
want to store in it.
Is there a linear relation between those variables (i.e. 8-byte oid + 8-byte
int => 16 bytes per element), or does MonetDB encode data using intervals or
some such scheme?
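If the relation is indeed linear, the estimate is just elements × (head width + tail width). Below is a minimal back-of-the-envelope sketch, assuming a flat, uncompressed layout with one fixed-width head (oid) entry plus one fixed-width tail entry per element; the widths and the absence of compression are assumptions, not confirmed MonetDB behaviour:

```java
// Back-of-the-envelope BAT size estimate, ASSUMING a purely linear,
// uncompressed layout: elements * (oid width + tail type width).
// Real MonetDB storage may differ.
public class BatSizeEstimate {
    static long estimateBytes(long elements, int oidBytes, int tailBytes) {
        return elements * (oidBytes + tailBytes);
    }

    public static void main(String[] args) {
        // 300 million rows, 8-byte oid head + 8-byte int tail per column
        long bytes = estimateBytes(300_000_000L, 8, 8);
        System.out.printf("%.2f GiB per column%n",
                bytes / (1024.0 * 1024 * 1024));
        // prints: 4.47 GiB per column
    }
}
```

Under that assumption, a single 300-million-row int column would already need roughly 4.5 GiB, before counting any of the other 10,000 columns.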
Thank you in advance and kudos for your big efforts,
Guillermo Arbeiza,
Open Sistemas de Información e Internet
garbeiza(AT)opensistemas.com
----------------------------------------------------------------------
Message sent by the openSistemas mail server (www.opensistemas.com)
Hi,
SELECT name FROM customer ORDER BY country
The above query fails with the error: ORDER BY: no such column 'country'
According to the Reference Manual I expected this to work. The following is
the relevant excerpt from Appendix A: SQL Features:
E121-02 ORDER BY columns need not be in select list S
My version of MonetDB is v5.0.0._beta1_2.
Please let me know if this is expected behaviour or whether I am missing something.
Thanks,
- Venkatesh
Hi,
Is there any relation between CPU, RAM and the amount of data that can be
loaded into a MonetDB database?
I am trying to load 13.4 million rows of data and I am getting a "Segmentation
Fault" on a Linux box with a 32-bit CPU and 4 GB of RAM.
I had similar issues on a Windows box too.
Thanks,
- Venkatesh
Hi all,
Last week I finished refactoring the JDBC code base, so that most of its
non-JDBC-specific functionality is now available through a separate library.
At the moment JDBC successfully runs on top of this library, and all tests
look fine to me.
The library can be found as the monetdb-1.0-mcl.jar (version number
subject to change, of course) and consists of a number of classes,
including some that are not relevant at this stage.
The library is set up to comply with the Java framework as much as possible.
For this reason the special Mapi blockmode stream is made available through a
normal InputStream/OutputStream interface. On top of these streams, normal
Readers/Writers can be created. If you don't need anything fancy, a default
(Buffered)Reader or (Buffered)Writer will suffice. Finally, for each line
read, three parsers are available that allow easy extraction of the fields in
a given string.
nl.cwi.monetdb.mcl.net.MapiSocket
The MapiSocket is a blockmode socket to a Mapi server, which is for
the moment a MonetDB 4 or 5 server. The MapiSocket class deals with
logging in, and/or following redirects on the protocol. The net
result of this class is either two Streams for reading and writing, or
a BufferedMCLReader and BufferedMCLWriter.
nl.cwi.monetdb.mcl.io.BufferedMCLReader
nl.cwi.monetdb.mcl.io.BufferedMCLWriter
A Reader and a Writer that are very well suited for MonetDB interaction.
They are simple wrappers that mostly add some functionality, like retrieving
the type of a line that has just been read, or writing a full line at once.
The line types from the BufferedMCLReader make it possible to efficiently
pick one of the parsers.
nl.cwi.monetdb.mcl.parser.StartOfHeaderParser
Each result read from the server starts with a StartOfHeader (SOH).
This SOH consists of a number of fields depending on its type. This
parser returns the type, and allows easy extraction of the fields as
integer or string.
nl.cwi.monetdb.mcl.parser.HeaderLineParser
(Tabular) results have headers that carry metadata about the tabular data.
These headers can easily be accessed through the HeaderLineParser. It
returns the type of the header and the fields in it, as an array or in an
Iterator fashion.
nl.cwi.monetdb.mcl.parser.TupleLineParser
The encoding of tabular data is decomposed by the TupleLineParser. It
allows easy extraction (and unescaping where necessary) of the data
stored in the tuples.
With this framework it should be easy to, e.g., parse (endlessly) incoming
tuples from a stream using a TupleLineParser, without using the rest of
the framework (in particular the MapiSocket).
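To illustrate what a tuple parser has to do, here is a small standalone sketch. It does not use the mcl classes above, and the delimiters it assumes (a leading "[ ", a trailing tab-"]", and ",\t" between fields) are my assumption about the textual tuple line format, not a specification:

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch (NOT the real TupleLineParser): splits an assumed
// MAPI-style tuple line of the form  [ "hello",\t42\t]  into raw field
// strings, unescaping quoted values along the way.
public class TupleSketch {
    static List<String> parse(String line) {
        // strip the assumed leading "[ " and trailing "\t]"
        String body = line.substring(2, line.length() - 2);
        List<String> fields = new ArrayList<>();
        for (String raw : body.split(",\t")) {
            String f = raw.trim();
            if (f.startsWith("\"") && f.endsWith("\"")) {
                // drop surrounding quotes and undo simple escapes
                f = f.substring(1, f.length() - 1)
                     .replace("\\\"", "\"")
                     .replace("\\\\", "\\");
            }
            fields.add(f);
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(parse("[ \"hello\",\t42\t]"));
        // prints: [hello, 42]
    }
}
```

The real TupleLineParser additionally reports line types and handles the full escaping rules; this sketch only shows the shape of the work.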
Hi,
I just started investigating the use of MonetDB with our relatively large
XML document(s). A document can contain up to 1 million nodes in a tree-type
structure, like this (simplified example):
<item id="1" key="32">
<item id="2" key="34"/>
<item id="3" key="32">
<item id="4" key="32"/>
<item id="5" key="342">
<item id="6" key="32212" />
</item>
</item>
</item>
<item id="7" key="2">
<item id="8" key="654"/>
</item>
<item>
......
</item>
.....
The key refers to another document that contains the actual data (if all
the items in the tree are unique, it contains the same number of nodes as the
tree, but it may be far fewer); basically a flat list of the item data (name
etc.):
<data key="32">
<name>hello</name>
<description>this is a test item</description>
</data>
<data key="342">
....
</data>
The general task is to get the tree location and the key value from the tree.
With a basic query like "//item[@id='33']" it takes about 3.5 seconds to get
the item, and about as long to get the metadata from the other document.
Using prepared queries had no effect (at least no improvement). The machine
is an Intel(R) Xeon(TM) CPU 2.80GHz on Linux with 3 GB of memory. Is this
level of performance expected, or am I missing something essential?
Another issue arose when I tried to run a query through the JDBC driver. It
dies with an exception (does it time out?):
Exception in thread "main" java.lang.AssertionError: block 0 should have been fetched by now :(
        at nl.cwi.monetdb.jdbc.MonetConnection$ResultSetResponse.getLine(MonetConnection.java:1730)
        at nl.cwi.monetdb.jdbc.MonetResultSet.absolute(MonetResultSet.java:188)
        at nl.cwi.monetdb.jdbc.MonetResultSet.relative(MonetResultSet.java:2132)
        at nl.cwi.monetdb.jdbc.MonetResultSet.next(MonetResultSet.java:2099)
        at TestCache.monetDBTest(TestCache.java:99)
        at TestCache.main(TestCache.java:180)
I tried to find an item by its description with
"//data/description[contains(.,'test')]". The query takes around 5 seconds
with the MapiClient.
MonetDB was downloaded and compiled yesterday (Monet Database Server
V4.16.2). My XPath skills are less than admirable, so all suggestions are
appreciated.
Cheers,
--
Tatu Lahtela <lahtela(a)gmail.com>