Jellyfin

Jellyfin is a free-software media system that lets you build your own home media server (you can think of it as your own personal Netflix!). You can read more about it at https://Jellyfin.org

But I found it hard to set up on CentOS 7, since there are no official instructions for CentOS.

Here are the steps I took:

1. Install FFmpeg

Run these commands in the CentOS terminal:

sudo yum install epel-release
sudo rpm -v --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
sudo yum install ffmpeg ffmpeg-devel
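
You can verify that FFmpeg installed correctly by checking its version:

ffmpeg -version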

2. Install Jellyfin

First, find out the name of the latest stable version in the Jellyfin CentOS repository at https://repo.jellyfin.org/releases/server/centos/

At the time of this post, the latest build is jellyfin-10.5.3-1.el7.x86_64.rpm. So download it:

wget https://repo.jellyfin.org/releases/server/centos/jellyfin-10.5.3-1.el7.x86_64.rpm

and install Jellyfin:

sudo yum localinstall jellyfin-10.5.3-1.el7.x86_64.rpm
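
To double-check that the package installed, you can query it with rpm:

rpm -qi jellyfin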

Enable Jellyfin so that it starts on every reboot, and start it now:

sudo systemctl enable jellyfin
sudo systemctl start jellyfin

Check that it is running:

sudo systemctl status jellyfin
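
As a quick sanity check, you can also confirm that the server is listening on its default port (8096):

sudo ss -tlnp | grep 8096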

Now you need to open the default Jellyfin port (8096) in the firewall. This opens access to the public (any IP):

sudo firewall-cmd --zone=public --add-port=8096/tcp --permanent
sudo firewall-cmd --reload

You can also limit which IPs can access your Jellyfin website, as shown below.
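
For example, if you only want your local network to reach Jellyfin, you can replace the open-to-everyone port rule with a firewalld rich rule. This is just a sketch assuming your LAN is 192.168.1.0/24; adjust the subnet to your own network:

sudo firewall-cmd --zone=public --remove-port=8096/tcp --permanent
sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="8096" protocol="tcp" accept' --permanent
sudo firewall-cmd --reload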

Now you can access your Jellyfin website at:

http://YOUR_SERVER_IP:8096

Now you should be able to go through the setup wizard for Jellyfin!
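
If the page does not load, you can first check locally on the server that the Jellyfin web server responds (a quick curl test):

curl -I http://localhost:8096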

Disk quota exceeded

If a user reaches the quota limit, the filesystem that holds user home directories on many High Performance Computing (HPC) servers (ZFS) does not even let the user delete files. So the following command does not work:

rm my_file.dat
rm: cannot remove file `my_file.dat': Disk quota exceeded

The reason is that ZFS needs to transiently write metadata before it can perform the deletion. The workaround is the following pair of commands:

# Truncate the file by copying /dev/null over it
# (replace my_file.dat with the file you want to delete)
cp /dev/null my_file.dat

# Remove the now-empty file
rm my_file.dat

The first command truncates the file you would like to delete to zero bytes by overwriting it with /dev/null. The second command removes the now-empty file. After that, you should have enough free quota to perform more deletions in your user directory.
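
If you have several large files to clear, the same truncate-then-delete trick can be wrapped in a small loop (a sketch; replace the file names with your own):

# free_quota.sh -- truncate each file so ZFS can update its metadata, then delete it
for f in my_file1.dat my_file2.dat my_file3.dat
do
    cp /dev/null "$f"
    rm "$f"
done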

Submitting jobs with dependencies

Working with High Performance Computing (HPC) servers, I often need to submit sequential jobs, that is, jobs with dependencies: the job you submit stays in the queue and waits for the running job to finish. I use the following command:

qsub -W depend=afterany:JOB_ID qsub_file

For example, if the running job ID is 4321234 and you would like to submit myrun.qsub, the command looks like this:

qsub -W depend=afterany:4321234 myrun.qsub
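
After submitting, the dependent job sits in the queue in a held state until job 4321234 finishes; you can check it with qstat:

qstat -u $USER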

But sometimes you have tens of jobs that you want to submit all at once. Here is a bash script you can use to submit a chain of jobs:

# File name: run_100jobs_for_me.sh
# Submit the first job and capture its job ID
job=$(qsub myrun1.qsub)

# Submit jobs 2 through 100, each depending on the previous one
for i in {2..100}
do
    job_next=$(qsub -W depend=afterany:$job myrun$i.qsub)
    job=$job_next
done

You can run the script above with the following command on the Linux command line (it uses bash features such as brace expansion, so run it with bash rather than sh):

bash run_100jobs_for_me.sh

And one last point! You can set the condition under which the next job will run simply by changing afterany in the commands above. These are the options:

# The job is scheduled if job JOB_ID exits without errors (i.e., completes successfully).
afterok:JOB_ID

# The job is scheduled if job JOB_ID exits with errors.
afternotok:JOB_ID

# The job is scheduled if job JOB_ID exits with or without errors.
afterany:JOB_ID
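
For example, to run a post-processing job only if the main run (job 4321234 from the example above) completes without errors, you could submit it like this (postprocess.qsub is a hypothetical submission script):

qsub -W depend=afterok:4321234 postprocess.qsub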