Read this article.
| Command | Description |
|---|---|
| | Download the installer. |
| | Run the `.sh` bash installer. |
| ENTER | To continue with the installer. |
| Keep reading the agreement to the end | |
| | To agree. |
| | Select an installation folder. This is just an example; use whatever you see fit. |
| | Do not run the init. Ideally you should, but our multiple NFS (Network File System) mounts make things complicated. |
| | Here, we make sure we run the init from our new installation. This change is only for your user account (your `~/.bashrc`). |
| | This change is only for your user account. |
| Logout and login to the SSH shell again | |
| | Once this is run, you should see `(base)` on your SSH prompt. |
| | You can use these commands to verify if you are using the correct conda/python installations. |
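A minimal sketch of such a verification, assuming the tools are named `conda` and `python` (it only prints where each resolves on your PATH and changes nothing):

```bash
# Report where conda and python resolve on the PATH.
# After a correct install, both should point inside your chosen folder.
for tool in conda python; do
  if path=$(command -v "$tool"); then
    printf '%s -> %s\n' "$tool" "$path"
  else
    printf '%s not found on PATH\n' "$tool"
  fi
done
```

`which conda` and `which python` give the same information; `command -v` is the POSIX-portable form.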
Source: Google Doc
Method 1: Please contact the admin and request that they run `update-alternatives` as documented here.
Method 2: All gcc/g++ versions are installed in /usr/bin/. The following is an example of how you can use gcc-8 as gcc using a simple trick (without sudo).
```bash
gcc -v                              # see what the current gcc version is
mkdir ~/symlinks
cd ~/symlinks
ln -s /usr/bin/gcc-8 ./gcc          # create a symbolic link called gcc to gcc-8
export PATH="$HOME/symlinks:$PATH"  # add the new gcc to the PATH (before the existing path)
cd ~
gcc -v                              # now you get gcc-8 when you run the gcc command
```
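The same trick can be tried safely without gcc-8 installed. In this sketch, `mytool` and the temporary directory are hypothetical stand-ins, with `/bin/echo` playing the role of the versioned binary:

```bash
# Hypothetical sketch of the symlink trick, using /bin/echo as the
# stand-in for the versioned binary so the mechanics can be tested.
dir=$(mktemp -d)                 # stands in for ~/symlinks
ln -s /bin/echo "$dir/mytool"    # stands in for: ln -s /usr/bin/gcc-8 ./gcc
export PATH="$dir:$PATH"         # prepend, so the link shadows anything later in PATH
command -v mytool                # prints the symlink's path: it is found first
mytool resolved                  # runs /bin/echo through the link; prints "resolved"
```

The only requirement is that your directory comes *before* `/usr/bin` in `$PATH`; the shell takes the first match it finds.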
You can check the current GPU utilization of our servers using the GPU Usage Meter.
First, list the available GPUs and their IDs by running:
```bash
nvidia-smi -L
```

You will see output like:
The numbers (0, 1, 2) are the GPU IDs. Use the CUDA_VISIBLE_DEVICES environment variable to limit which GPUs your program can use.
To use only GPU 0:
```bash
export CUDA_VISIBLE_DEVICES=0
```

To use GPU 0 and GPU 2:

```bash
export CUDA_VISIBLE_DEVICES=0,2
```

Then run your code as usual:

```bash
python trainer.py
```

You can verify which GPUs are visible by running:

```bash
echo $CUDA_VISIBLE_DEVICES
```

This is useful when sharing a server with others, so your program does not occupy all GPUs.
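The variable can also be set for a single command instead of exported for the whole session. A minimal sketch, with a harmless printout standing in for your training script:

```bash
# Scope the variable to one command; the parent shell keeps its own value.
CUDA_VISIBLE_DEVICES=0,2 sh -c 'echo "this run sees: $CUDA_VISIBLE_DEVICES"'
echo "parent shell: ${CUDA_VISIBLE_DEVICES:-unset}"   # unchanged by the line above
```

This avoids accidentally restricting every later run in the same SSH session.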
Individual students have their storage at babbage.ce.pdn.ac.lk:/home/e14000 [Not really, but don't worry]. This storage is mounted at the same location on every other server through network interfaces. However, you can request additional storage for your projects on individual servers. Such storage is usually faster than babbage.
Make a post on #ask-for-help on #PeraCOM Discord with the following information.
We will create a unix group and a folder with chmod 770 permissions on the server. We will update this information on the "Server groups, folders, and datasets" Google sheet on this web page.
Please note that every folder comes with an expiry date. Check the date on the Google sheet and make sure that your endorsing academic staff member sends a time extension request (e.g. "extend the expiration date of kepler:/e14-4yp-explainable-ml by 6 months") to webmaster.github.ce@eng.pdn.ac.lk when it is close to the expiry date.
Note: