util Diff

Differences From Artifact [1814ca81a5]:


/* [ʞ] bgrd.c
 *  ~ lexi hale <lexi@hale.su>
 *  $ cc -Ofast bgrd.c -lutil -obgrd
 *  © affero general public license 3.0

 * i am angry beyond words that this program had to
 * be written
 *
 * bgrd "background read" is a tool for launching an 
 * application and retrieving the first line of output.
 * this is a nontrivial task for a number of reasons,
 * all of which are incredibly stupid, however, most
 * saliently: buffered io.
 *
 * the suckmore web browser `surf` has an option '-x'
 * which is intended to print the X window id of the
 * window it opens, so it can be captured and used in
 * scripts. so far so good.
 *
 * except unix has two distinctly different concepts
 * of IO. there's POSIX IO, and then there's libc IO.
 *
 * POSIX IO uses the shit in <fcntl.h> and <unistd.h>;
 * syscalls like read(2), write(2), and pipe(2) - the
 * good, simple shit God made unix for. this is really
 * bare-metal; these are basically C wrappers over
 * kernel syscalls. POSIX IO uses plain old ints as
 * file descriptors, and it doesn't fuck around. when
 * you say "write," god dammit, it WRITES.
 *
 * libc is a very different beast. libc has opinions.
 * libc has abstractions. libc has its own entire
 * goddamn DSL by which to specify format strings,
 * because apparently someone felt called to reinvent
 * FORTRAN except worse. printf(), you know, the first
 * function they ever teach you in C 101? (more like CS
 * 403 these days, but aghhh) it's actually a heinously
 * complicated, dangerous, slow, insecure mess that drags
 * a ridiculous amount of background bullshit into the
 * frame. such as BUFFERING.
 *
 * libc, you see, is too good to just wrap write() and
 * read(). no, it also has to decide when to SEND them.
 * this is responsible for that behavior every c coder
 * trips over eventually - you know the one, where if
 * you don't end your format string in '\n' it isn't
 * printed, sometimes even if the program exits. this
 * is not actually a POSIX thing; it's a libc thing.
 *
 * libc has a couple different kinds of buffering tactics
 * (set with setvbuf(), a function nobody seems to know
 * exists) that it uses in different circumstances.
 * the printf \n gotcha behavior is what's known as
 * "line buffering." however, because libc thinks it's
 * fucking smart or something, it's not content to just
 * pick one predictable behavior and stick to it. oh no.
 *
 * ever noticed how programs can seem to tell whether
 * they're connected to a terminal (and thus can output
 * all those fancy ansi formatting codes), or whether
 * you're redirecting their stdout to a file? that's 
 * because there's more than one kind of pipe. the kind
 * you create with pipe(2) - and the kind you create with
 * openpty(3).
 *
 * a pty is, essentially, a kind of pipe that carries
 * extra information around, the information you access
 * via ioctl and termios. it's designed to imitate a TTY,
 * so that a shell can create one, aim a process at it,
 * and then that process can draw TUIs and shit on it.
 * and some programs are designed to behave differently
 * depending on whether they're hooked up to a pipe or a
 * pty.
 *
 * libc, tragically, is among them.
 *
 * if libc notices it's hooked up to a pipe instead of a pty,
 * it will change its default buffering strategy. newlines
 * will suddenly cease flushing stdout, and libc will
 * only print its buffer in one of two circumstances: the
 * buffer is filled up, or the program exits.
 *
 * this is a problem if you are, say, trying to output a
 * handle that scripts can use to control the running
 * program.
 *
 * the `surf` developers had a couple of options. they
 * could have simply broken out the POSIX headers and
 * sent the X id to stdout with a call to write(2), the
 * correct thing to do. they could have thrown in a call
 * to setvbuf(3) to explicitly pick a buffering strategy
 * compatible with their usecase, the sensibly wrong
 * thing to do. they could have explicitly flushed stdout
 * after printf(3)'ing to it, the dumb and error-prone
 * thing to do.
 *
 * instead, they did *nothing.*
 *
 * so if you run `surf -x` from a terminal, great!
 * you'll see it print the x window id first thing.
 * you'll then try to capture it via any number of
 * increasingly desperate means, all of which will fail
 * hilariously. finally, you'll spend four goddamn hours
 * after midnight reading source code and manpages and
 * frantically googling around and digging up unix lore
 * and finally, FINALLY figure out what the batshit FUCK
 * is going on, and write this goddamn utility to hack
 * around the suckmore crowd's suckitude.
 *
 * so i figured i'd save you some time.
 *
 * i am probably going to submit a PR eventually because
 * holy hell this is just so monumentally bozotic.
 *
 * in the mean time, you can use this extremely
 * no-bullshit wrapper by running `set surfwin (bgrd
 * (which surf) surf -x <params>)` or whatever the bash
 * equivalent is and it will immediately launch surf in
 * the background, printing the X window and exiting
 * as soon as it hits a newline. it should be adaptable
 * to similar scenarios if you find yourself dealing with
 * similarly broken software tho.
 *
 * in conclusion, read lenin. */

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>





To Artifact [d69c7e1283]:

/* [ʞ] bgrd.c
 *  ~ lexi hale <lexi@hale.su>
 *  $ cc -Ofast bgrd.c -lutil -obgrd
 *  © affero general public license 3.0

 * i am  angry beyond words  that this program had  to be
 * written
 *
 * bgrd  "background read"  is  a tool  for launching  an
 * application and  retrieving the first line  of output.
 * this is  a nontrivial  task for  a number  of reasons,
 * all  of which  are  incredibly  stupid, however,  most
 * saliently: buffered io.
 *
 * the  suckmore web  browser `surf`  has an  option '-x'
 * which  is intended  to print  the X  window id  of the
 * window it  opens, so  it can be  captured and  used in
 * scripts. so far so good.
 *
 * except unix  has two distinctly different  concepts of
 * IO. there's POSIX IO, and then there's libc IO.
 *
 * POSIX IO  uses the  shit in <fcntl.h>  and <unistd.h>;
 * syscalls  like read(2),  write(2), and  pipe(2) -  the
 * good, simple  shit God made  unix for. this  is really
 * bare-metal; these are basically C wrappers over kernel
 * syscalls.  POSIX  IO  uses  plain  old  ints  as  file
 * descriptors, and it doesn't  fuck around. when you say
 * "write," god dammit, it WRITES.
 *
 * libc  is a  very different  beast. libc  has opinions.
 * libc has abstractions. libc has its own entire goddamn
 * DSL  by  which  to  specify  format  strings,  because
 * apparently  someone felt  called  to reinvent  FORTRAN
 * except worse.  printf(), you know, the  first function
 * they  ever teach  you  in  C 101?  (more  like CS  403
 * these  days,  but  aghhh) it's  actually  a  heinously
 * complicated, dangerous, slow, insecure mess that drags
 * a ridiculous  amount of  background bullshit  into the
 * frame. such as BUFFERING.
 *
 * libc, you  see, is too  good to just wrap  write() and
 * read(). no, it  also has to decide when  to SEND them.
 * this is  responsible for  that behavior every  c coder
 * trips over eventually - you know the one, where if you
 * don't end your format string in '\n' it isn't printed,
 * sometimes  even  if the  program  exits.  this is  not
 * actually a POSIX thing; it's a libc thing.
 *
 * libc has a couple different kinds of buffering tactics
 * (set with  setvbuf(), a function nobody  seems to know
 * exists) that  it uses in different  circumstances. the
 * printf  \n gotcha  behavior is  what's known  as "line
 * buffering." however, because  libc thinks it's fucking
 * smart or something, it's not  content to just pick one
 * predictable behavior and stick to it. oh no.
 *
 * ever  noticed how  programs can  seem to  tell whether
 * they're connected  to a terminal (and  thus can output
 * all  those fancy  ansi formatting  codes), or  whether
 * you're  redirecting their  stdout  to  a file?  that's
 * because there's more  than one kind of  pipe. the kind
 * you create with pipe(2) - and the kind you create with
 * openpty(3).
 *
 * a pty  is, essentially,  a kind  of pipe  that carries
 * extra information  around, the information  you access
 * via ioctl and termios. it's designed to imitate a TTY,
 * so that a  shell can create one, aim a  process at it,
 * and then  that process can  draw TUIs and shit  on it.
 * and some  programs are designed to  behave differently
 * depending on whether they're hooked  up to a pipe or a
 * pty.
 *
 * libc, tragically, is among them.
 *
 * if libc notices it's hooked up  to a pipe instead of a
 * pty, it  will change  its default  buffering strategy.
 * newlines  will  suddenly  cease flushing  stdout,  and
 * libc  will  only  print  its  buffer  in  one  of  two
 * circumstances: the buffer is filled up, or the program
 * exits.
 *
 * this is a problem if you  are, say, trying to output a
 * handle  that scripts  can use  to control  the running
 * program.
 *
 * the `surf`  developers had  a couple of  options. they
 * could  have simply  broken out  the POSIX  headers and
 * sent the X  id to stdout with a call  to write(2), the
 * correct thing to do. they  could have thrown in a call
 * to setvbuf(3) to explicitly  pick a buffering strategy
 * compatible  with  their  usecase, the  sensibly  wrong
 * thing to do. they could have explicitly flushed stdout
 * after printf(3)'ing  to it,  the dumb  and error-prone
 * thing to do.
 *
 * instead, they did *nothing.*
 *
 * so  if  you run  `surf  -x`  from a  terminal,  great!
 * you'll  see it  print  the x  window  id first  thing.
 * you'll  then  try to  capture  it  via any  number  of
 * increasingly desperate means, all of which will fail
 * hilariously. finally, you'll  spend four goddamn hours
 * after midnight  reading source  code and  manpages and
 * frantically googling  around and digging up  unix lore
 * and finally, FINALLY figure  out what the batshit FUCK
 * is going  on, and write  this goddamn utility  to hack
 * around the suckmore crowd's suckitude.
 *
 * so i figured i'd save you some time.
 *
 * i am probably going to  submit a PR eventually because
 * holy hell this is just so monumentally bozotic.
 *
 * in  the   mean  time,  you  can   use  this  extremely
 * no-bullshit  wrapper  by  running `set  surfwin  (bgrd
 * (which surf)  surf -x  <params>)` or whatever  the bash
 * equivalent is  and it will immediately  launch surf in
 * the background,  printing the X window  and exiting as
 * soon as it  hits a newline. it should  be adaptable to
 * similar scenarios  if you  find yourself  dealing with
 * similarly broken software tho.
 *
 * in conclusion, read lenin. */

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
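
To make the line-buffering gotcha the comment describes concrete, here is a minimal standalone demo; it is not taken from surf or bgrd, just an illustration of the '\n' behaviour on a terminal.

#include <stdio.h>
#include <unistd.h>

int main(void) {
	printf("no newline yet");  /* sits in stdio's buffer... */
	sleep(2);                  /* ...so nothing appears during this pause */
	printf(" - and now\n");    /* on a tty, the '\n' finally flushes it */
	return 0;                  /* (a normal exit also flushes) */
}

Run it on a terminal and the whole line shows up only after the two-second pause.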
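
The pipe-versus-tty distinction the comment leans on is the same one isatty(3) exposes; stdio consults it, roughly speaking, when picking a default buffering mode for stdout. A small sketch to watch it flip - the message goes to stderr, which is unbuffered, so it always appears:

#include <stdio.h>
#include <unistd.h>

int main(void) {
	if (isatty(STDOUT_FILENO))
		fprintf(stderr, "stdout is a tty: stdio defaults to line buffering\n");
	else
		fprintf(stderr, "stdout is a pipe or file: stdio defaults to full buffering\n");
	return 0;
}

Compare running it bare against piping it through `cat`.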
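
The three fixes the comment says the surf developers could have shipped, sketched side by side. `winid` and the example value are stand-ins, not anything from the surf source; any one of the three would have been enough.

#include <stdio.h>
#include <unistd.h>

/* 1. the "correct" fix: skip stdio and write(2) straight to fd 1 */
static void emit_raw(unsigned long winid) {
	char buf[32];
	int n = snprintf(buf, sizeof buf, "%lu\n", winid);
	if (n > 0) write(STDOUT_FILENO, buf, (size_t)n);
}

/* 2. the "sensibly wrong" fix: pick a buffering strategy explicitly so
 *    the pipe-vs-tty default never applies (strictly, setvbuf wants to
 *    run before the first other operation on the stream) */
static void emit_setvbuf(unsigned long winid) {
	setvbuf(stdout, NULL, _IOLBF, 0);   /* or _IONBF */
	printf("%lu\n", winid);
}

/* 3. the "dumb and error-prone" fix: flush by hand after every print */
static void emit_flush(unsigned long winid) {
	printf("%lu\n", winid);
	fflush(stdout);
}

int main(void) {
	emit_raw(0x1c00021);       /* example value only */
	emit_setvbuf(0x1c00021);
	emit_flush(0x1c00021);
	return 0;
}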
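
Finally, the excerpt above cuts off right after the includes, so the actual bgrd implementation is not visible here. As a rough reconstruction of the approach the comment (and the -lutil in the build line) points at - hand the child a pty via openpty(3) so its libc keeps line-buffering, relay everything up to the first newline, then get out of the way - it would look something like this; again, a sketch, not the real bgrd source:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <pty.h>   /* openpty(3); link with -lutil (<util.h> on BSD/macOS) */

int main(int argc, char **argv) {
	if (argc < 3) {
		fprintf(stderr, "usage: %s /path/to/prog argv0 [args...]\n", argv[0]);
		return 1;
	}

	int master, slave;
	if (openpty(&master, &slave, NULL, NULL, NULL) < 0) { perror("openpty"); return 1; }

	pid_t pid = fork();
	if (pid < 0) { perror("fork"); return 1; }

	if (pid == 0) {                 /* child: stdout/stderr onto the pty slave, then exec */
		close(master);
		dup2(slave, STDOUT_FILENO);
		dup2(slave, STDERR_FILENO);
		close(slave);
		execv(argv[1], argv + 2);
		perror("execv");
		_exit(127);
	}

	close(slave);
	char c;                         /* parent: copy the master to stdout until '\n' */
	while (read(master, &c, 1) == 1) {
		write(STDOUT_FILENO, &c, 1);
		if (c == '\n') break;
	}
	/* a real version would keep draining the master (or detach it
	 * properly) so the child doesn't stall or die once we exit */
	return 0;
}

With something like that in place, the fish invocation from the comment reads `set surfwin (bgrd (which surf) surf -x)`; the bash equivalent would presumably be along the lines of `surfwin=$(bgrd "$(which surf)" surf -x)`.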