Main Page
What is this "wiki" thing?
If you're new to wikis, please read a brief explanation of what they are. You might then want to watch this short introduction video.
We use this wiki for easily updatable, easily expandable, user-editable MonetDB-related content, as well as for some coordination within our group.
Consult the MediaWiki User Guides page for detailed guides on using various aspects of MediaWiki (the platform on which this Wiki is built).
Getting started with MonetDB
- MonetDB:Getting started
- MonetDB:Building from sources
- MonetDB:Building from sources on OS X
- MonetDB:Installing on OS X
- MonetDB:Various tips
Internal
Software development
Organization
- Sinterklaas programming challenge
- Sinterklaas score board
- Thursday Think Tank
- Conferences
- Astronomy: Bulk Source Association
SciLens cluster
The SciLens cluster has been acquired to provide a sizable and flexible experimentation platform for the Database Architectures group at CWI. Access is granted to members of the DA group.
SciLens usage policy
If you are cooperating with non-DA members who have been (temporarily) granted access to our SciLens cluster (no non-DA people have access to the cluster other than through such cooperation!), please make sure to (1) inform them about our usage (reporting/claiming) policies (see above) and (2) introduce them and their work to the group.
If you plan to use machines in our SciLens cluster, please make sure you report your usage (plans) and claim machines the usual way via the web pages below. Please do not forget to release them again once you're done using them! Do not hesitate to ask in case you have any questions!
SciLens backup
The data on the SciLens machines is not backed up by ITF. To ensure continued availability of your data, you either have to find backup storage (within the cluster, on your desktop, or on external media) or be able to regenerate your setup from scratch. If in doubt, or for advice on where and how to make your backup, contact our system administrator, Arjen de Rijke.
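As a minimal sketch of such a backup (assuming your desktop has enough space and you have configured the automatic proxying described below; host and directory names are illustrative), you could periodically pull a copy of your scratch directory to your desktop:
[someone@somedesktop ~]$ rsync -aH bricks09:/scratch/someone/ $HOME/scilens-backup/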
Logging into the cluster
Your first login
Logging into the cluster is regulated through the scilens2-ssh (virtual) machine. Initially, you _cannot_ access it; please ask Arjen for access. With his help you'll obtain a special key file for use with the cluster, which you can place at $HOME/.ssh/id_rsa_scilens on your machine.
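Note that SSH refuses private key files that are readable by others, so give the key restrictive permissions:
[someone@somedesktop ~]$ chmod 600 $HOME/.ssh/id_rsa_scilens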
Thereafter you can SSH to the scilens2-ssh gateway machine:
[someone@somedesktop ~]$ ssh -A -i $HOME/.ssh/id_rsa_scilens scilens2-ssh.da.cwi.nl
and subsequently move to the specific machine you desire to use, e.g.
[someone@scilens2-ssh ~]$ ssh bricks09
Automation of proxying
Make sure you can SSH into the cluster via the gateway machine. On the gateway machine, a key has been created for you, probably named .ssh/id_rsa_scilens_<username>. Copy this key to the .ssh directory on your desktop.
Add this information to the config file $HOME/.ssh/config on your CWI desktop machine:
Host scilens2-ssh.da.cwi.nl
    User your-username
    IdentityFile /home/your-username/.ssh/id_rsa_scilens
    ForwardAgent yes

Host bricks* rocks* pebbles2* diamonds* stones* gems*
    IdentityFile /home/your-username/.ssh/id_rsa_scilens_<username>
    ProxyCommand ssh scilens2-ssh.da.cwi.nl -W %h:%p
You will now be able to issue an SSH command directly:
[someone@somedesktop ~]$ ssh bricks09
and the connection will be automatically proxied over an SSH connection to the gateway machine.
Notes:
- Remember to make $HOME/.ssh/config readable only by you if you've just created it.
- Add the keys to your key chain using ssh-add (see the sketch after this list).
- This might conflict with other hostnames beginning with bricks, rocks, pebbles2, diamonds, stones or gems.
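A minimal sketch of the first two steps (key file names as above; adjust paths to your setup):
[someone@somedesktop ~]$ chmod 600 $HOME/.ssh/config
[someone@somedesktop ~]$ ssh-add $HOME/.ssh/id_rsa_scilens $HOME/.ssh/id_rsa_scilens_<username>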
Disk usage on the cluster
When you log in to one of the SciLens machines, you land in your home directory on that machine. Try to avoid using this directory: it is mounted on a small disk, shared by all users, and it is deleted when the operating system is reinstalled. Restrict its use to configuration files, symlinks, etc.
For most practical purposes, you should make yourself a directory in /scratch or /data, where there should be ample disk space.
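For example (directory name illustrative; /scratch is assumed writable by all users):
[someone@bricks09 ~]$ mkdir -p /scratch/someone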
Copying data in and out of the cluster
As explained above, all communication into the cluster is mediated by SSH through the scilens2-ssh virtual machine, and this is specifically true when you want to send files in. Fortunately, if you've configured [[#Automation of proxying|automatic proxying]], all SSH-related utilities will work. You can thus push and pull files from the outside:
[someone@somedesktop ~]$ scp foo bricks09:/scratch/someone/bar
[someone@somedesktop ~]$ scp bricks09:/scratch/someone/bar foo
What about pushing and pulling from the inside? It seems you're limited to key-file authentication when going from the cluster out to our desktop machines, so you should generate yourself an (RSA) key pair on the cluster machine, e.g.
[someone@bricks09 ]$ ssh-keygen -t rsa
and then, once the new public key has been added to $HOME/.ssh/authorized_keys on your desktop, you can do
[someone@bricks09 /scratch/someone]$ scp foo somedesktop.ins.cwi.nl:bar
[someone@bricks09 /scratch/someone]$ scp somedesktop.ins.cwi.nl:bar foo
as well.
Note: if you don't provide scp with a path for the local file, it uses the current working directory ($PWD); but if you don't specify a path on the remote machine and just write machine:filename, you'll be referring to your home directory on that machine.
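To illustrate both defaults (file names purely illustrative):
[someone@somedesktop /tmp]$ scp foo bricks09:bar   # copies /tmp/foo to ~/bar on bricks09
[someone@somedesktop /tmp]$ scp bricks09:bar .     # copies ~/bar on bricks09 to /tmp/bar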
Copying & synchronizing data within the cluster
Copying files within the cluster is straightforward using the scp command, identifying the machines and directory locations involved. For example, while on bricks09 you can copy the file data to bricks10:
scp data bricks10:/scratch/your_username
You can also use the rsync command to clone your environment easily on multiple machines. This requires an rsync daemon configuration file with the appropriate directives. Create such a file, e.g. /scratch/your_username/rsyncd.conf, containing:
port = 2873
use chroot = no

[scratch]
path = /scratch/your_username
The port number can be chosen freely above 1024 (the unprivileged range); pick one of your own, otherwise conflicts with other users might occur. Let's assume again you are on bricks09 and you want a synchronized copy on bricks10. On bricks09, create the above configuration file and start the rsync daemon with the command:
$ rsync --daemon --config=/scratch/your_username/rsyncd.conf
Then log in to bricks10 and, from your /scratch/your_username directory, execute (the double colon and the --port option select the daemon and its [scratch] module defined above):
$ rsync -aH --port=2873 bricks09::scratch/ /scratch/your_username/
For further details, see the rsync manual or contact the expert. After the rsync is complete, you should terminate the rsync daemon on bricks09.
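rsync writes no pid file unless configured to do so, so one way to terminate your daemon (a sketch; adjust the username) is to kill it by command-line pattern:
[someone@bricks09 ~]$ pkill -u your_username -f 'rsync --daemon'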
Returning small files from the SciLens machines to your desktop can be done using scp. For big ones, contact the expert. Likewise, for importing large amounts of data into the cluster, contact the expert.
Printing
Files to be printed should be sent to the CWI print spooler and a specific printer, e.g.
lpr -H spool.cwi.nl -Ppear <files>
Accessing the internet
You can use wget to download any page or file available on the WWW from within the cluster.
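For example (URL purely illustrative):
[someone@bricks09 ~]$ wget https://example.org/dataset.tar.gz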
You have to contact the expert if you want to run a web client/server setup on the machines, as this will require a custom SSH tunneling setup.
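As a rough sketch of what such a setup might involve (ports and hosts are illustrative; the actual configuration should be agreed with the expert), local port forwarding through the gateway could expose a web server running on a cluster node to your desktop:
[someone@somedesktop ~]$ ssh -L 8080:bricks09:8080 scilens2-ssh.da.cwi.nl
With this tunnel in place, http://localhost:8080 on your desktop reaches port 8080 on bricks09.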
Root-specific features
Some tools that require root privileges have been put in place to make your life easier with respect to performance analysis. These have to be called with the sudo command, e.g.:
$ sudo iotop
Pre-installed user-requested packages
External software that requires root permissions, e.g. postgresql, mysql and friends, can only be installed from source in your local environment.
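A minimal sketch of such a local (non-root) source installation, following the common autoconf pattern (package name, version and paths are illustrative):
$ tar xf somepackage-1.0.tar.gz && cd somepackage-1.0
$ ./configure --prefix=/scratch/your_username/local
$ make && make install
$ export PATH=/scratch/your_username/local/bin:$PATH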
Libraries that are available in the Fedora distribution but not installed can be enabled by contacting the expert.
Specific application frameworks, e.g. Java, can be installed from the Fedora repository, but due to versioning issues we advise using a local copy as much as possible. When in doubt, follow the expert route.
Work with multiple machines
With the command clush, one can execute the same command on multiple SciLens machines simultaneously. Assume you are logged in to the scilens2-ssh gateway; then run:
$ clush -w bricks[01-16]
This will give you a prompt. Any command you type in here, e.g., 'df -h /scratch', will be run on the machines bricks01 through bricks16. Try it out to see the results produced by the selected bricks machines.
The -w option allows you to pass a list of machines you want to work on.
In theory, you can address all rocks machines with one clush command: clush -w rocks[001-144]. However, in practice, it is advisable to work with smaller groups of machines, say, 20. This also makes it easier to cancel a command if one of the machines freezes.
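For one-off commands you can also pass the command directly instead of using the interactive prompt; clush's -b option gathers identical output from different nodes into one block:
$ clush -w bricks[01-16] -b df -h /scratch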
With quit you can stop clush. For more information, please see the man page of clush.
SciLens cluster hardware
- Standard hardware
- Non-standard hardware
- diamonds
- gems
- stones
- bricks
- rocks
- pebbles2
- (pebbles)
SciLens cluster use
We use the pages below to register usage claims of SciLens cluster machines.
There is one page per month per machine class / tier.
When a new month starts, we create a new set of pages for this month, one per machine class / tier, from the respective template pages.
We can use the template pages to claim machines for usages that last more than a month.
Please do always claim machines before you start using them!
Please remember to release machines (also from the templates!) once you're done using them!
(In case you cannot access this wiki yourself, please ask your "host" or any DA member for help!)
- 2016
- November 2016: diamonds, stones, bricks, rocks
- October 2016: diamonds, stones, bricks, rocks
- September 2016: diamonds, stones, bricks, rocks
- August 2016: diamonds, stones, bricks, rocks
- July 2016: diamonds, stones, bricks, rocks
- Jun. 2016: diamonds, stones, bricks, rocks, pebbles
- May 2016: diamonds, stones, bricks, rocks, pebbles
- Apr. 2016: diamonds, stones, bricks, rocks, pebbles
- Mar. 2016: diamonds, stones, bricks, rocks, pebbles
- Feb. 2016: diamonds, stones, bricks, rocks, pebbles
- Jan. 2016: diamonds, stones, bricks, rocks, pebbles
- 2015
- December 2015: diamonds, stones, bricks, rocks, pebbles
- November 2015: diamonds, stones, bricks, rocks, pebbles
- October 2015: diamonds, stones, bricks, rocks, pebbles
- September 2015: diamonds, stones, bricks, rocks, pebbles
- August 2015: diamonds, stones, bricks, rocks, pebbles
- July 2015: diamonds, stones, bricks, rocks, pebbles
- June 2015: diamonds, stones, bricks, rocks, pebbles
- May 2015: diamonds, stones, bricks, rocks, pebbles
- Apr. 2015: diamonds, stones, bricks, rocks, pebbles
- Mar. 2015: diamonds, stones, bricks, rocks, pebbles
- Feb. 2015: diamonds, stones, bricks, rocks, pebbles
- Jan. 2015: diamonds, stones, bricks, rocks, pebbles
- 2014
- Dec. 2014: diamonds, stones, bricks, rocks, pebbles
- Nov. 2014: diamonds, stones, bricks, rocks, pebbles
- Oct. 2014: diamonds, stones, bricks, rocks, pebbles
- Sep. 2014: diamonds, stones, bricks, rocks, pebbles
- Aug. 2014: diamonds, stones, bricks, rocks, pebbles
- Jul. 2014: diamonds, stones, bricks, rocks, pebbles
- Jun. 2014: diamonds, stones, bricks, rocks, pebbles
- May 2014: stones, bricks, rocks, pebbles
- Apr. 2014: stones, bricks, rocks, pebbles
- Mar. 2014: stones, bricks, rocks, pebbles
- Feb. 2014: bricks, rocks, pebbles
- Jan. 2014: bricks, rocks, pebbles
- 2013