Minor DJGPP fixes

Nate Eldredge neldredge@hmc.edu
Mon May 1 02:21:00 GMT 2000


Eli Zaretskii writes:
 > > - If it doesn't work, you need to open the file in blocking mode, not
 > >   just elide the fcntl(); otherwise the code reading the file will
 > >   break. 
 > 
 > Could you please explain what ``open in blocking mode'' means?
 > Isn't a normal default `open' good enough?  If not, why not?

Suppose you try to read from a file, but for some reason there is no
input available.  (Perhaps it's a hardware device, or a network
connection, or a pipe, or something.)  There are two obvious things
the system could do.

- It can wait (block) until input becomes available (normally
  scheduling other processes in the meantime).  This is what happens
  when the file is in blocking mode.

- It can return an error (EAGAIN, on POSIX systems) to indicate there
  is no input right now, and you should try again later.  This happens
  in non-blocking mode.

So the blocking/non-blocking mode on the file is how you select which
behavior you want.
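
To make this concrete, here is a minimal sketch in C (my own
illustration, not code from cpp; /dev/rmt0 is just a stand-in for a
device that may have no input ready):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main (void)
    {
      char buf[512];
      ssize_t n;

      /* O_NONBLOCK requests non-blocking mode; without it, the read
         below would simply wait until input arrived.  */
      int fd = open ("/dev/rmt0", O_RDONLY | O_NONBLOCK);
      if (fd < 0)
        {
          perror ("open");
          return 1;
        }

      n = read (fd, buf, sizeof buf);
      if (n < 0 && errno == EAGAIN)
        /* Non-blocking mode: no input right now, try again later.  */
        fprintf (stderr, "no input available yet\n");

      close (fd);
      return 0;
    }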

An ordinary `open' would use blocking mode.  However, in the case of
devices, the open itself can block if some initialization can't be
done immediately, causing the problems explained below.
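
As I understand it, the idiom at issue is roughly the following (a
sketch only; the real cpp code may differ in its details, and
open_then_block is a name I made up):

    #include <fcntl.h>

    /* Open FNAME so that the open itself cannot hang on a device,
       then switch the descriptor back to blocking mode so that
       subsequent reads wait for input as usual.  The F_SETFL call
       is the fcntl() the thread proposes removing.  Returns the
       descriptor, or -1 on error.  */
    static int
    open_then_block (const char *fname)
    {
      int fd = open (fname, O_RDONLY | O_NONBLOCK);
      if (fd >= 0)
        {
          int flags = fcntl (fd, F_GETFL);
          if (flags >= 0)
            fcntl (fd, F_SETFL, flags & ~O_NONBLOCK);
        }
      return fd;
    }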

 > > This fcntl() is there to avoid a problem that never happens in real
 > > life: someone does #include </dev/rmt0> and the preprocessor gets
 > > wedged because there's no tape.
 > 
 > Shouldn't people who do this get what they were asking for?  I mean,
 > if someone *really* wanted the preprocessor to read the tape,
 > shouldn't the preprocessor get stuck if there's no tape?

Perhaps.  The more obvious behavior would be to complain that
/dev/rmt0 has some problem, and abort.

 > > Therefore, I'd be willing to dump the fcntl() call entirely and open
 > > the file in blocking mode on all hosts.  Does anyone else have an
 > > opinion?
 > 
 > I need to understand what a ``blocking mode'' is to form a useful
 > opinion.  I assume that the same code will be used to read normal
 > files as well.

Yes, blocking/non-blocking mode has no effect when dealing with
regular files: a read from a regular file always has something to
return (data or end-of-file), so there is never a ``try again later''
case for non-blocking mode to report.

Incidentally, devices have some interesting ramifications for things
like public-access compilers.  #include </dev/zero> will usually make
the compiler or preprocessor slowly eat up all memory, and then die.
This can make for a nice denial of service, especially on a system
that kills other processes when memory gets low, as Linux can.  So it's
good to see that this is being disabled for the future.
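
The attack needs nothing more than a source file like this, fed to a
compiler that is willing to open device files:

    /* evil.c -- asks the preprocessor to include an endless stream
       of zero bytes.  */
    #include </dev/zero>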

(I'm still trying to figure out how to exploit things like 
#include </etc/passwd> to steal system info...)

-- 

Nate Eldredge
neldredge@hmc.edu

