Jellyfin is a free-software media system that lets you build your own home media server (you can think of it as your own personal Netflix!). You can read more about it here:

But I found it a struggle to set up on CentOS 7, since there are no official instructions for CentOS.

Here are the steps I took:

1. Install FFmpeg

Run these commands in the CentOS terminal:

sudo yum install epel-release
sudo rpm -v --import
sudo rpm -Uvh
sudo yum install ffmpeg ffmpeg-devel

2. Install Jellyfin

Find the name of the latest stable build here:

At the time of this post, the latest build is jellyfin-10.5.3-1.el7.x86_64.rpm. So download it:


and install Jellyfin:

sudo yum localinstall jellyfin-10.5.3-1.el7.x86_64.rpm

Enable Jellyfin so it starts on every reboot, then start it now:

sudo systemctl enable jellyfin
sudo systemctl start jellyfin

Check that it is running:

sudo systemctl status jellyfin

Now, you need to open Jellyfin's default access port (8096) in the firewall. This opens access to the public (any IP):

firewall-cmd --zone=public --add-port=8096/tcp --permanent
firewall-cmd --reload

You can also limit which IPs can access your Jellyfin website.
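As a sketch of how that could look (the subnet 192.168.1.0/24 is a placeholder for your own trusted network), you could swap the wide-open port rule for a firewalld rich rule that only accepts connections from that range:

```shell
# Remove the wide-open rule (if you added it earlier)
sudo firewall-cmd --zone=public --remove-port=8096/tcp --permanent

# Allow port 8096 only from a trusted subnet (placeholder: 192.168.1.0/24)
sudo firewall-cmd --zone=public --permanent \
  --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="8096" protocol="tcp" accept'

# Apply the changes
sudo firewall-cmd --reload
```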

Now you can access your Jellyfin website at:


Now you should be able to go through the setup wizard for Jellyfin!

Recently, I had to deal with a set of data, some from experiments and some obtained using the MATLAB “Curve Fitting Tool”. As much as I like MATLAB figures, I usually find Excel's features more desirable for good-looking plots! So, I needed to extract the data from the Curve Fitting Tool figure. Here is what I did:

Save the figure as a “.fig” file.

Make an m-file with the following code to get the x, y data of the curve!

% Reads the figure file ('my_figure.fig' is a placeholder name)
fig = openfig('my_figure.fig');

% Gets the objects of type line from the figure
line_objects = findobj(fig, 'Type', 'line');

% Obtains the "x" and "y" data of the curves
x_data = get(line_objects, 'XData');
y_data = get(line_objects, 'YData');

Then you have access to the data, and you can get the arrays as

% Assuming there are only 2 curves in the figure
% Gets the data as arrays

% x-values of 2 curves
x_data_curve_fit = x_data{1,1};
x_data_experimental = x_data{1,2};

% y-values of 2 curves
y_data_curve_fit = y_data{1,1};
y_data_experimental = y_data{1,2};

Then, your data is ready to copy/export to Excel.

Disk quota exceeded

If a user reaches the quota limit, the ZFS filesystem that holds users' home directories on High Performance Computing (HPC) servers does not let the user delete files. So the following command does not work:

rm my_file.dat
rm: cannot remove file `my_file.dat': Disk quota exceeded

The reason is that the filesystem needs to transiently write metadata before it can perform the deletion, and a full quota leaves no room for that write. The solution is the following pair of commands:

# Copy a null file over the file you want to delete
cp /dev/null my_file.dat

# Remove the now-empty file
rm my_file.dat

The first line overwrites the file you would like to delete with an empty (null) file. The second line deletes that file. Now you should have enough quota to perform more deletions in your user directory.
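An equivalent variant of the same trick (my own sketch, not from any HPC documentation; `my_file.dat` is a placeholder name) is to shrink the file in place with `truncate`, which likewise allocates no new data blocks:

```shell
# Make a throwaway file to demonstrate (placeholder name)
echo "some data" > my_file.dat

# Shrink it to zero bytes in place -- no new data blocks are allocated
truncate -s 0 my_file.dat

# The deletion now succeeds even at the quota limit
rm my_file.dat
```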

Wordpress/Python integration

Why does it matter? Because we might need to write several posts with relatively similar content at the same time.

First of all, I need to install a Python library to interact with the WordPress blog. We can install it easily with the following command. Open CMD.exe on your Windows machine and run:

pip install python-wordpress-xmlrpc

This command adds the python-wordpress-xmlrpc library to your Python installation. The library talks to WordPress through its XML-RPC API and is very easy to work with. First, I need to import the required classes:

from wordpress_xmlrpc import Client
from wordpress_xmlrpc.methods import posts
from wordpress_xmlrpc import WordPressPost

Then I need to declare my WordPress blog login information:

your_blog = Client('', 'USERNAME', 'PASSWORD')

Then, I can access the existing posts with the following line:

posts_list = your_blog.call(posts.GetPosts())

Now, I want to write a new post and publish it in my WordPress blog:

post = WordPressPost()
post.title = 'MY_POST_TITLE'
post.slug = 'MY_POST_URL'
post.content = 'YOUR_POST_CONTENT'
post.post_status = 'publish'
your_blog.call(posts.NewPost(post))

post.title sets the title of the post, and post.slug sets its URL; for example, I can choose “how-to-use-python-to-write-a-post-in-your-wordpress-website“. post.content holds the post content; new lines, tabs, etc. can be used according to Python syntax. Finally, setting post.post_status to “publish” publishes the post!

and the post is automatically uploaded to the WordPress blog.

More information on this Python library: python-wordpress-xmlrpc

Working with High Performance Computing (HPC) servers, I often need to submit sequential jobs, or jobs with dependencies: the job you submit stays in the queue and waits for the running job to finish. I use the following command:

qsub -W depend=afterany:JOB_ID qsub_file

For example, if the running job ID is 4321234 and you would like to submit myrun.qsub, the command will look like:

qsub -W depend=afterany:4321234 myrun.qsub

But sometimes you have tens of jobs you want to submit all at once. Here is a bash code you can use to submit a chain of jobs:

# File name :

# The first job you submit; qsub prints the job ID, so capture it
job=$(qsub myrun1.qsub)

# Submission of jobs 2 through 100!
for i in {2..100}
do
    job=$(qsub -W depend=afterany:$job myrun$i.qsub)
done

You can run the code above using the following command in Linux command line:


and one last point! You can set the condition in which the next job will run, simply by changing afterany in the commands above. These are the options:

# afterok: job is scheduled if the job JOB_ID exits without errors or is successfully completed.

# afternotok: job is scheduled if the job JOB_ID exited with errors.

# afterany: job is scheduled if the job JOB_ID exits with or without errors.
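As a sketch (the job ID and the file name postprocess.qsub are placeholders), a post-processing job that should only run if the main job succeeds would be submitted with afterok:

```shell
# Run postprocess.qsub only if job 4321234 completes without errors
qsub -W depend=afterok:4321234 postprocess.qsub
```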