Named Pipes to Turn CLI Programs Into Python Functions
Tl;dr
A named pipe is like a file that doesn't store anything. It has a path (the name, I guess). It can be opened, read from, and written to, but the content is temporary and held in memory.
Since a named pipe is also a pipe, it acts a bit differently than a normal file. When you open it you can only open it read-only or write-only, not read-write. The idea is that you'd have one process with it open for writing (the producer) and one process with it open for reading (the consumer). This matches a pipe, since pipes have a read end and a write end. At least on Unix.
A call to open it for reading or writing will block until another process opens it for the other end. Calls to read or write will also block if the named pipe is empty or full, respectively.
Named pipes can be useful if you need to create CLI pipelines with programs that consume or produce multiple inputs and outputs. You have to be careful not to create deadlocks due to the blocking behaviour though. You also can't use them with a program that needs to seek in a file, or one that reads and writes the same file.
I'm using them to run file-based CLI programs from Python without writing to the disk.
What I Was Trying To Do
I'm using two command-line programs, rgbasm and rgblink. They're part of the RGBDS Game Boy assembler toolchain. If I were running them from the command line I would use the following two commands:
rgbasm A.asm -o A.o
rgblink A.o B.o -o A.gb -m A.map -n A.sym
I want my Python script to be able to provide A.asm and get back the contents of A.gb, A.map, and A.sym.
So I can just have my script write A.asm, use subprocess.run() to run the two commands, then read in the three files!
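In code, that boring version would look roughly like this (a sketch, not my actual code; assemble_on_disk is a made-up name, and I'm ignoring the extra B.o object file since we only provide one source file):

import subprocess
from pathlib import Path

def assemble_on_disk(asm_source):
    # write the input, run the two tools, read the three outputs
    Path('A.asm').write_text(asm_source)
    subprocess.run(['rgbasm', 'A.asm', '-o', 'A.o'], check=True)
    subprocess.run(['rgblink', 'A.o', '-o', 'A.gb',
                    '-m', 'A.map', '-n', 'A.sym'], check=True)
    return (Path('A.gb').read_bytes(),
            Path('A.map').read_text(),
            Path('A.sym').read_text())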
Unnecessary Constraints!
I didn't want to write anything to disk.
Why?
My justification was that I want to run this on a server, and having it write to and read from the disk felt wrong. That's a bad reason though. This is for a personal toy project. Making scalable, production quality software is not the goal! I'm not saying I achieved scalable, production quality software, just that it's not the goal.
The actual reason was that I thought it would be possible to do and I wanted to figure out how. That's a good reason! At least for a personal toy project. Even if it is a bit of a detour.
What Did I Know Going In
- Linux stuff
  - pipes exist
  - you can use them to route stdin/out/err from one program to another
  - files exist
  - everything is a file: directories, devices, programs, you, the internet, the computer, your home, the love you feel for others, this list entry
- Python
  - normal file stuff
  - how to use the subprocess module to run programs
  - I can yell PIPE to communicate with subprocesses over stdin/out/err
  - some thread stuff
Named Pipes / FIFOs
I quickly came across named pipes and how to create them in Python with os.mkfifo or from the command line with mkfifo (I don't know about Windows though). After being briefly distracted by going overboard with context managers to create and delete a temp directory and a bunch of FIFOs, I ran into my first gotcha: my program froze when I tried to open a FIFO.
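As an aside, that context-manager setup was probably something like this (a rough reconstruction, not my actual code; fifo_dir is a name I'm making up here):

import os
import tempfile
from contextlib import contextmanager

@contextmanager
def fifo_dir(names):
    # make a temp directory with one FIFO per name,
    # and delete the whole thing when the block exits
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, name) for name in names]
        for path in paths:
            os.mkfifo(path)
        yield paths

Then everything else can live inside a with fifo_dir(...) block and cleanup is automatic.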
The freeze happened because I was only opening one end of the FIFO/pipe, and opening one end blocks until the other end is opened too.
This was lucky! Confusing, but lucky! You see, I didn't know what I was doing, and I very well could have accidentally written my code in a way that missed this problem. Then I never would have learned that opening a FIFO/named pipe will block unless the other end is also open. I also wouldn't have realized that I knew even less about what I was doing than usual and that I'd need to pay more attention.
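Here's the gotcha in isolation, as a tiny demo I might write (my own sketch, not code from the project; demo_fifo is a made-up name). Without the consumer thread, the open() for writing would hang forever:

import os
import threading

path = 'demo_fifo'  # made-up name for this demo
os.mkfifo(path)

def consumer():
    # this open() blocks until something opens the write end
    with open(path) as rf:
        print(rf.read())

t = threading.Thread(target=consumer)
t.start()

# this open() blocks until the consumer's open() runs
with open(path, 'w') as wf:
    wf.write('hello from the write end')

t.join()
os.remove(path)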
You can use os.open() to open without blocking, and that's what I did at first. Something like this (from rough memory, probably not exactly what I did):
import os
import subprocess

def opener(path, flags):
    # add O_NONBLOCK so opening one end of a FIFO doesn't block
    return os.open(path, flags | os.O_NONBLOCK)

with open('fifo_in', 'w', opener=opener) as wf, \
     open('fifo_out', 'rb', opener=opener) as rf:
    wf.write(input_data)
    p = subprocess.Popen(['rgbasm', 'fifo_in', '-o', 'fifo_middle'])
    subprocess.run(['rgblink', 'fifo_middle', '-o', 'fifo_out'])
    p.wait()
    output = rf.read()
I was surprised when it worked. But I'm pretty sure this can deadlock in a few ways. Here's what's happening:
- There are three named pipes: fifo_in, fifo_middle, and fifo_out.
  - fifo_in and fifo_out are used to get data into rgbasm and out of rgblink respectively.
  - fifo_middle is used to get data from rgbasm into rgblink.
  - fifo_in and fifo_out are opened by the Python code using a specially defined opener function.
  - fifo_out is opened as binary because it contains binary data.
- Some data is written to fifo_in. This is the input to rgbasm.
- rgbasm is run using subprocess.Popen() so Python doesn't block waiting for it to complete. rgbasm itself will be blocked when it tries to open fifo_middle because (presumably) it's trying to open one end of a named pipe with a normal open without the O_NONBLOCK flag. So rgbasm won't finish until something opens fifo_middle for reading.
- rgblink is run using subprocess.run(). Python will block (wait) until it completes before moving on. rgblink will open fifo_middle for reading and fifo_out for writing. Neither of these blocks because fifo_middle is already opened for writing by rgbasm and fifo_out is already open for reading by the Python code.
- This unblocks rgbasm, so it can start producing output and everything moves along.
- I added a p.wait() just to make sure rgbasm was done.
- Finally I read the output!
So What's the Problem?
For one thing, it only works because the input isn't very big! Pipes/named-pipes/FIFOs have a maximum capacity. If you try to write to one when it's full, the write will block, or fail if O_NONBLOCK is set. Reading from an empty one has similar results. There's more info in the pipe(7) man page, in the "I/O on pipes and FIFOs", "Pipe capacity", and "PIPE_BUF" sections. So if the input data were bigger, the Python script would crash on wf.write(input_data).
Similarly, if the output were too big then rgblink would block when it tries to write to a full pipe. The Python script would still be stuck inside subprocess.run() waiting for rgblink to finish, so it would never reach rf.read() and everything would just be stuck!
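On Linux you can actually check a pipe's capacity from Python (a small sketch, assuming Python 3.10 or newer, where fcntl exposes F_GETPIPE_SZ):

import fcntl
import os

# this is an anonymous pipe, but FIFOs have the same capacity behaviour
r, w = os.pipe()
print(fcntl.fcntl(w, fcntl.F_GETPIPE_SZ))  # typically 65536 bytes on Linux
os.close(r)
os.close(w)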
Maybe Don't Use O_NONBLOCK?
Ok, I could change it to something like:
p_rgbasm = subprocess.Popen(['rgbasm', 'fifo_in', '-o', 'fifo_middle'])
p_rgblink = subprocess.Popen(['rgblink', 'fifo_middle', '-o', 'fifo_out'])

with open('fifo_in', 'w') as wf, \
     open('fifo_out', 'rb') as rf:
    wf.write(input_data)
    p_rgblink.wait()
    output = rf.read()
Now we're not using O_NONBLOCK, so at least reads and writes won't fail in the Python code. I even think this might work in this case! Here's what I think is happening:
- rgbasm and rgblink are started and running in the background. They're both definitely blocked though, since they need to open fifo_in and fifo_out respectively, and the other ends aren't open yet.
- I open the other ends of fifo_in and fifo_out, so rgbasm and rgblink should be unblocked.
  - They'll both still get blocked when they try to read from empty pipes (rgbasm from fifo_in and rgblink from fifo_middle).
- I write to fifo_in. Now rgbasm is unblocked and can read from fifo_in. It should also be writing to fifo_middle, so rgblink isn't blocked and it can read and start producing output and writing to fifo_out.
- I wait for rgblink to be done.
- I read the output from fifo_out. This will unblock rgblink if fifo_out got full...
- Oh shoot, I never get to rf.read() when fifo_out is full because I'm still waiting on p_rgblink.wait().
I could probably fix this by getting rid of p_rgblink.wait(), but I don't actually know if rf.read() will always return the entire output. I think it will as long as rgblink doesn't do something strange like close the file and reopen it to write some more. I think this because read() will read until it hits an end-of-file, and for a pipe that only happens when all the write ends are closed.
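For what it's worth, that matches this little experiment (my own sketch with a made-up path, not code from the project): the reader's read() only returns once the writer closes its end.

import os
import threading

path = 'eof_demo_fifo'  # made-up name
os.mkfifo(path)

def producer():
    with open(path, 'w') as wf:
        wf.write('first chunk, ')
        wf.write('second chunk')
    # the reader only sees end-of-file once this close happens

t = threading.Thread(target=producer)
t.start()

with open(path) as rf:
    print(rf.read())  # prints both chunks together, after the writer closes

t.join()
os.remove(path)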
Anyway we have other problems.
Other Problems
The Python script could have blocked on open('fifo_in', 'w') because we have no guarantee about the order in which rgbasm and rgblink open their files. For example, suppose rgbasm opens its files in the order

1. fifo_middle
2. fifo_in

and rgblink opens its files in the order

1. fifo_out
2. fifo_middle

So now we're in a deadlock!

1. The Python script is blocked waiting for rgbasm to open fifo_in.
2. rgbasm won't open fifo_in since it's blocked waiting for rgblink to open fifo_middle.
3. rgblink won't open fifo_middle since it's blocked waiting for the Python script to open fifo_out.
4. The Python script won't open fifo_out because of point 1!
Grumble
Maybe there's something in asyncio that could help me?
Ok, no, that's more complicated than I thought it would be and I don't see a magic wand for this anyway.
Fine I'll Use Threads
I already have deadlocks. I can't make it much worse, right?
import threading
import subprocess
import io

def doRGBStuff(input_data):
    def do_write(file_name, data):
        with open(file_name, 'w') as wf:
            wf.write(data)

    def do_read(file_name, out_stream):
        with open(file_name, 'rb') as rf:
            out_stream.write(rf.read())

    # the writer thread blocks on open/write instead of the main script
    write_thread = threading.Thread(target=do_write, args=('fifo_in', input_data))
    write_thread.start()

    p_rgbasm = subprocess.Popen(['rgbasm', 'fifo_in', '-o', 'fifo_middle'])
    p_rgblink = subprocess.Popen(['rgblink', 'fifo_middle', '-o', 'fifo_out'])

    # the reader thread collects the output into an in-memory stream
    out_stream = io.BytesIO()
    read_thread = threading.Thread(target=do_read, args=('fifo_out', out_stream))
    read_thread.start()
    read_thread.join()

    return out_stream.getvalue()
Now we have write_thread to write data into fifo_in and read_thread to read from fifo_out and put it into an io.BytesIO stream. Everything that interacts with the FIFOs is either a thread or a subprocess, so when they block they won't block the main Python script. The script blocks at the end with read_thread.join(). This should wait until rgblink finishes writing its output. It still won't work properly if rgblink closes and reopens the output file in the middle.
I can't think of anything that would let this deadlock, but maybe I'm missing something.
To complete the original task of getting three outputs, I'll just need two more FIFOs and reader threads, something like the sketch below.
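A hedged sketch of what I mean: the output FIFO names (fifo_gb, fifo_map, fifo_sym) are made up, and I'm assuming all five FIFOs already exist.

import io
import subprocess
import threading

def doRGBStuffThreeOutputs(input_data):
    def do_write(file_name, data):
        with open(file_name, 'w') as wf:
            wf.write(data)

    def do_read(file_name, out_stream):
        with open(file_name, 'rb') as rf:
            out_stream.write(rf.read())

    # writer thread feeds the assembly source into rgbasm
    threading.Thread(target=do_write, args=('fifo_in', input_data)).start()

    subprocess.Popen(['rgbasm', 'fifo_in', '-o', 'fifo_middle'])
    subprocess.Popen(['rgblink', 'fifo_middle',
                      '-o', 'fifo_gb', '-m', 'fifo_map', '-n', 'fifo_sym'])

    # one reader thread per output FIFO
    out_names = ('fifo_gb', 'fifo_map', 'fifo_sym')
    streams = {name: io.BytesIO() for name in out_names}
    readers = [threading.Thread(target=do_read, args=(name, streams[name]))
               for name in out_names]
    for t in readers:
        t.start()
    for t in readers:
        t.join()

    return tuple(streams[name].getvalue() for name in out_names)

Each output gets its own reader thread, so whichever order rgblink opens its outputs in, there's already a reader waiting on the other end.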
Also I haven't tested any of the specific code in this post! It's based on some other code that DOES work though!