Compare commits


55 Commits

Author  SHA1  Message  Date
Manuel Amador (Rudd-O)  033d86035c  Tag 0.0.21.  2024-02-20 12:16:23 +00:00
Manuel Amador (Rudd-O)  d1af78c00b  F39 and Qubes 4.2, no longer F37 and Qubes 4.1.  2024-02-20 12:16:11 +00:00
Manuel Amador (Rudd-O)  6014c6f190  qrun do not use pipes.  2023-08-11 22:21:19 +00:00
Manuel Amador (Rudd-O)  037e5af9bd  Add Fedora 38.  2023-08-06 11:30:11 +00:00
Manuel Amador (Rudd-O)  84b7c6b0eb  Fix bug in ellipsized.  2023-03-13 15:25:45 +00:00
Manuel Amador (Rudd-O)  3b1ae61238  Fix quote generator.  2023-03-13 15:14:47 +00:00
Manuel Amador (Rudd-O)  c85b35867d  Nicely ellipsize logged commands.  2023-03-13 13:06:47 +00:00
Manuel Amador (Rudd-O)  782c557cb6  Update documentation to catch up with Qubes 4.1 policy changes.  2023-02-25 18:24:58 +00:00
Manuel Amador (Rudd-O)  f6dc498036  New build parameters.  2023-02-21 22:33:37 +00:00
Manuel Amador (Rudd-O)  50b3deddd2  Tag 0.0.17.  2023-02-21 22:27:41 +00:00
Manuel Amador (Rudd-O)  8675eaa547  Eliminate the lock.  2023-02-21 22:27:10 +00:00
Rudd-O  3a60f0ee4b  Merge pull request #19 from ProfessorManhattan/master (Update qubesformation.py)  2022-10-24 12:40:44 +00:00
Brian Zalewski  f750054efa  Update qubesformation.py  2022-10-23 04:05:03 -04:00
Brian Zalewski  45cd87d984  Update qubesformation.py  2022-10-23 02:47:08 -04:00
Brian Zalewski  feff9f41a7  Update qubesformation.py  2022-10-23 02:40:10 -04:00
Brian Zalewski  f1db77fb05  Update qubesformation.py  2022-10-23 01:38:17 -04:00
Brian Zalewski  e5aef5be64  Update qubesformation.py  2022-10-18 12:03:20 -04:00
Brian Zalewski  77015a49ac  Update qubesformation.py  2022-10-18 11:49:48 -04:00
Brian Zalewski  9043a3a736  Update qubesformation.py  2022-10-18 11:31:17 -04:00
Brian Zalewski  8ad65b7e27  Update qubesformation.py  2022-10-18 11:18:52 -04:00
Brian Zalewski  a41aa775d2  Update qubesformation.py  2022-10-18 10:46:21 -04:00
Brian Zalewski  6984a541c6  Update qubesformation.py  2022-10-18 10:29:21 -04:00
Brian Zalewski  a6384ab40f  Update qubesformation.py  2022-10-18 05:08:21 -04:00
Brian Zalewski  bbed07547c  Update qubesformation.py  2022-10-04 00:06:42 -04:00
Brian Zalewski  6ae2ae87c0  Update qubesformation.py  2022-10-03 22:52:41 -04:00
Rudd-O  c4029694fb  Merge pull request #18 from ProfessorManhattan/master (Missing encoding for qubesformation)  2022-10-01 15:01:25 +00:00
Brian Zalewski  9a592548e2  Update qubesformation.py  2022-10-01 01:05:16 -04:00
Brian Zalewski  aa712c35e0  Update qubesformation.py  2022-09-30 21:10:38 -04:00
Brian Zalewski  6eba5edf1f  Update commonlib.py  2022-09-27 16:32:15 -04:00
Brian Zalewski  e3d1084c92  Update qubes.py  2022-09-26 23:40:59 -04:00
Brian Zalewski  17303a9f92  Update bombshell-client  2022-09-26 22:09:50 -04:00
Brian Zalewski  ce843d49f7  Merge pull request #1 from ProfessorManhattan/ProfessorManhattan-patch-1 (Update qubesformation.py)  2022-09-18 23:32:37 -04:00
Brian Zalewski  8a850692f8  Update qubesformation.py  2022-09-18 23:30:29 -04:00
Manuel Amador (Rudd-O)  2b8f4e3a90  Tag new.  2022-09-07 02:44:25 +00:00
Manuel Amador (Rudd-O)  4966b9e814  Fix put protocol to work correctly.  2022-09-07 02:42:28 +00:00
Manuel Amador (Rudd-O)  0604255e7e  Tag 0.0.15.  2022-08-21 01:37:14 +00:00
Manuel Amador (Rudd-O)  a89647c462  Fix #10  2022-08-21 01:37:05 +00:00
Manuel Amador (Rudd-O)  3f242216cc  Fix #13.  2022-08-21 01:36:03 +00:00
Manuel Amador (Rudd-O)  759e37b796  Update docs to explain vmshell must be exe.  2022-08-21 01:34:23 +00:00
Manuel Amador (Rudd-O)  cd0df3cccf  Tag 0.0.13.  2022-08-21 01:32:34 +00:00
Manuel Amador (Rudd-O)  9871f0aeec  Merge remote-tracking branch 'origin/master'  2022-08-21 01:31:39 +00:00
Manuel Amador (Rudd-O)  6918df4f62  Fix incomplete read from the remote side.  2022-08-21 01:31:24 +00:00
Manuel Amador (Rudd-O)  7d56bc1225  Tag 0.0.12.  2022-08-18 15:37:48 +00:00
Manuel Amador (Rudd-O)  3ad3761f2f  These are meant to be bytes!  2022-08-18 15:37:12 +00:00
Manuel Amador (Rudd-O)  a55d7cd4d0  Tag 0.0.11.  2022-07-12 06:29:28 +00:00
Manuel Amador (Rudd-O)  5bbbe2f791  Tag 0.0.10.  2022-07-12 06:26:55 +00:00
Manuel Amador (Rudd-O)  920805a8fd  Fix encoding error.  2022-07-12 05:53:21 +00:00
Manuel Amador (Rudd-O)  f84379bb33  Fix error, 2.  2022-07-12 05:52:41 +00:00
Manuel Amador (Rudd-O)  78e00bba3a  Fix error.  2022-07-12 05:52:27 +00:00
Manuel Amador (Rudd-O)  d480886f7a  set proc name.  2022-07-12 05:50:12 +00:00
Manuel Amador (Rudd-O)  167a82bac8  Code reformat and quality improvement.  2022-07-12 05:47:00 +00:00
Manuel Amador (Rudd-O)  f6c623e5db  F36  2022-07-11 14:38:02 +00:00
Manuel Amador (Rudd-O)  259224c7f7  Bump version.  2022-06-01 03:23:58 +00:00
Manuel Amador (Rudd-O)  b9f3eca4d9  Fix deprecated Python code.  2022-06-01 03:22:01 +00:00
Manuel Amador (Rudd-O)  03fc7da7de  JQ plugin added for lookups.  2022-06-01 03:21:12 +00:00
10 changed files with 458 additions and 336 deletions


````diff
@@ -89,21 +89,21 @@ Enabling bombshell-client access to dom0
 ----------------------------------------
 `dom0` needs its `qubes.VMShell` service activated. As `root` in `dom0`,
-create a file `/etc/qubes-rpc/qubes.VMshell` with mode `0644` and make
+create a file `/etc/qubes-rpc/qubes.VMshell` with mode `0755` and make
 sure its contents say `/bin/bash`.
-You will then create a file `/etc/qubes-rpc/policy/qubes.VMShell` with
-mode 0664, owned by your login user, and group `qubes`. Add a policy
+You will then create a file `/etc/qubes/policy.d/80-ansible-qubes.policy`
+with mode 0664, owned by `root` and group `qubes`. Add a policy
 line towards the top of the file:
 ```
-yourvm dom0 ask
+qubes.VMShell * controller * allow
 ```
-Where `yourvm` represents the name of the VM you will be executing
-`bombshell-client` against dom0 from.
-That's it -- `bombshell-client` should work against dom0 now. Of course,
+Where `controller` represents the name of the VM you will be executing
+`bombshell-client` against `dom0` from.
+That's it -- `bombshell-client` should work against `dom0` now. Of course,
 you can adjust the policy to have it not ask — do the security math
 on what that implies.
````
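Concretely, the updated instructions in this hunk amount to writing two files in `dom0`. A sketch of that setup, run as `root` in `dom0`; the paths follow the README text above, and `controller` is the example VM name used there, so substitute your own:

```python
# Sketch only: reproduces the dom0 setup described in the README hunk above.
# Must run as root in dom0; "controller" is a placeholder VM name.
import grp
import os

# 1. The qubes.VMShell RPC endpoint: an executable file that runs bash.
with open("/etc/qubes-rpc/qubes.VMshell", "w") as f:
    f.write("/bin/bash\n")
os.chmod("/etc/qubes-rpc/qubes.VMshell", 0o755)

# 2. The Qubes 4.2-style policy allowing the controller VM to invoke
#    qubes.VMShell on any destination (change "allow" to "ask" if preferred).
policy = "/etc/qubes/policy.d/80-ansible-qubes.policy"
with open(policy, "w") as f:
    f.write("qubes.VMShell * controller * allow\n")
os.chown(policy, 0, grp.getgrnam("qubes").gr_gid)  # root:qubes
os.chmod(policy, 0o664)
```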


```diff
@@ -96,6 +96,8 @@ def inject_qubes(inject):
             pass
         elif vmtype == "ProxyVM":
             add(flags, "proxy")
+        elif vmtype == "DispVM":
+            pass
         elif vmtype == "TemplateVM":
             try:
                 qubes["source"] = qubes["template"]
```


```diff
@@ -8,15 +8,24 @@ from ansible.plugins.action.template import ActionModule as template
 sys.path.insert(0, os.path.dirname(__file__))
 import commonlib
 contents = """{{ vms | to_nice_yaml }}"""
 topcontents = "{{ saltenv }}:\n '*':\n - {{ recipename }}\n"
 def generate_datastructure(vms, task_vars):
     dc = collections.OrderedDict
     d = dc()
     for n, data in vms.items():
+        # This block will skip any VMs that are not in the groups defined
+        # in the 'formation_vm_groups' variable.  This allows you to deploy
+        # in multiple stages, which is useful in cases where you want to
+        # create a template after another template is already provisioned.
+        if 'formation_vm_groups' in task_vars:
+            continueLoop = True
+            for group in task_vars['formation_vm_groups']:
+                if n in task_vars['hostvars'][n]['groups'][group]:
+                    continueLoop = False
+            if continueLoop:
+                continue
         qubes = data['qubes']
         d[task_vars['hostvars'][n]['inventory_hostname_short']] = dc(qvm=['vm'])
         vm = d[task_vars['hostvars'][n]['inventory_hostname_short']]
@@ -90,7 +99,6 @@ def generate_datastructure(vms, task_vars):
     return d
 class ActionModule(template):
     TRANSFERS_FILES = True
@@ -99,7 +107,7 @@ class ActionModule(template):
         qubesdata = commonlib.inject_qubes(task_vars)
         task_vars["vms"] = generate_datastructure(qubesdata, task_vars)
         with tempfile.NamedTemporaryFile() as x:
-            x.write(contents)
+            x.write(contents.encode())
             x.flush()
             self._task.args['src'] = x.name
             retval = template.run(self, tmp, task_vars)
@@ -107,7 +115,7 @@ class ActionModule(template):
             return retval
         with tempfile.NamedTemporaryFile() as y:
-            y.write(topcontents)
+            y.write(topcontents.encode())
             y.flush()
             # Create new tmp path -- the other was blown away.
```
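The skip logic added in the first hunk can be exercised on its own. The data shapes below mirror Ansible's `hostvars`/`groups` structures, but the helper name and sample values are invented for illustration:

```python
# Standalone sketch of the 'formation_vm_groups' filter added above.
# filter_vms and the sample inventory are hypothetical; the skip logic
# mirrors the hunk: a VM survives only if it belongs to at least one
# of the listed groups (no filter variable means keep everything).
def filter_vms(vms, task_vars):
    selected = {}
    for name in vms:
        if 'formation_vm_groups' in task_vars:
            skip = True
            for group in task_vars['formation_vm_groups']:
                if name in task_vars['hostvars'][name]['groups'].get(group, ()):
                    skip = False
            if skip:
                continue
        selected[name] = vms[name]
    return selected

task_vars = {
    'formation_vm_groups': ['templates'],
    'hostvars': {
        'tpl-base': {'groups': {'templates': ['tpl-base']}},
        'work': {'groups': {'appvms': ['work']}},
    },
}
vms = {'tpl-base': {}, 'work': {}}
print(sorted(filter_vms(vms, task_vars)))  # ['tpl-base']
```

Staged deployments then simply rerun the formation with a different `formation_vm_groups` value per stage.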


```diff
@@ -3,7 +3,7 @@
 %define mybuildnumber %{?build_number}%{?!build_number:1}
 Name: ansible-qubes
-Version: 0.0.8
+Version: 0.0.21
 Release: %{mybuildnumber}%{?dist}
 Summary: Inter-VM program execution for Qubes OS AppVMs and StandaloneVMs
 BuildArch: noarch
```


```diff
@@ -2,17 +2,19 @@
 import base64
 import pickle
-import contextlib
+import ctypes
+import ctypes.util
 import errno
 import fcntl
 import os
-import pipes
 try:
-    import queue
+    from shlex import quote
 except ImportError:
-    import Queue as queue
+    from pipes import quote  # noqa
+try:
+    from queue import Queue
+except ImportError:
+    from Queue import Queue  # noqa
 import select
 import signal
 import struct
```
```diff
@@ -24,61 +26,66 @@ import time
 import traceback
-MAX_MUX_READ = 128*1024  # 64*1024*1024
+MAX_MUX_READ = 128 * 1024  # 64*1024*1024
 PACKLEN = 8
 PACKFORMAT = "!HbIx"
-@contextlib.contextmanager
-def mutexfile(filepath):
-    oldumask = os.umask(0o077)
-    try:
-        f = open(filepath, "a")
-    finally:
-        os.umask(oldumask)
-    fcntl.lockf(f.fileno(), fcntl.LOCK_EX)
-    yield
-    f.close()
+def set_proc_name(newname):
+    from ctypes import cdll, byref, create_string_buffer
+
+    if isinstance(newname, str):
+        newname = newname.encode("utf-8")
+    libc = cdll.LoadLibrary("libc.so.6")
+    buff = create_string_buffer(len(newname) + 1)
+    buff.value = newname
+    libc.prctl(15, byref(buff), 0, 0, 0)
 def unset_cloexec(fd):
     old = fcntl.fcntl(fd, fcntl.F_GETFD)
-    fcntl.fcntl(fd, fcntl.F_SETFD, old & ~ fcntl.FD_CLOEXEC)
+    fcntl.fcntl(fd, fcntl.F_SETFD, old & ~fcntl.FD_CLOEXEC)
 def openfdforappend(fd):
     f = None
     try:
         f = os.fdopen(fd, "ab", 0)
     except IOError as e:
         if e.errno != errno.ESPIPE:
             raise
         f = os.fdopen(fd, "wb", 0)
     unset_cloexec(f.fileno())
     return f
 def openfdforread(fd):
     f = os.fdopen(fd, "rb", 0)
     unset_cloexec(f.fileno())
     return f
 debug_lock = threading.Lock()
 debug_enabled = False
 _startt = time.time()
-class LoggingEmu():
+
+class LoggingEmu:
     def __init__(self, prefix):
         self.prefix = prefix
         syslog.openlog("bombshell-client.%s" % self.prefix)
     def debug(self, *a, **kw):
         if not debug_enabled:
             return
         self._print(syslog.LOG_DEBUG, *a, **kw)
     def info(self, *a, **kw):
         self._print(syslog.LOG_INFO, *a, **kw)
     def error(self, *a, **kw):
         self._print(syslog.LOG_ERR, *a, **kw)
     def _print(self, prio, *a, **kw):
         debug_lock.acquire()
         global _startt
```
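Both multiplexer classes further down frame every chunk with the `PACKFORMAT` header defined above: stream number, active flag, payload length, one pad byte. A minimal round-trip of that framing (the sample values are made up):

```python
import struct

PACKFORMAT = "!HbIx"  # stream number (H), active flag (b), length (I), pad (x)
PACKLEN = struct.calcsize(PACKFORMAT)

# Frame announcing 5 payload bytes on stream 1.
header = struct.pack(PACKFORMAT, 1, True, 5)
n, active, ln = struct.unpack(PACKFORMAT, header)
print(PACKLEN, n, active, ln)  # 8 1 1 5
```

An inactive frame (`active=False, length=0`) is how one side tells its peer to close the corresponding stream.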
```diff
@@ -88,108 +95,126 @@ class LoggingEmu():
                 string = a[0]
             else:
                 string = a[0] % a[1:]
-            syslog.syslog(prio, ("%.3f " % deltat) + threading.currentThread().getName() + ": " + string)
+            n = threading.current_thread().name
+            syslog.syslog(
+                prio,
+                ("%.3f " % deltat) + n + ": " + string,
+            )
         finally:
             debug_lock.release()
 logging = None
 def send_confirmation(chan, retval, errmsg):
     chan.write(struct.pack("!H", retval))
-    l = len(errmsg)
-    assert l < 1<<32
-    chan.write(struct.pack("!I", l))
+    ln = len(errmsg)
+    assert ln < 1 << 32
+    chan.write(struct.pack("!I", ln))
     chan.write(errmsg)
     chan.flush()
-    logging.debug("Sent confirmation on channel %s: %s %s", chan, retval, errmsg)
+    logging.debug(
+        "Sent confirmation on channel %s: %s %s",
+        chan,
+        retval,
+        errmsg,
+    )
 def recv_confirmation(chan):
     logging.debug("Waiting for confirmation on channel %s", chan)
     r = chan.read(2)
     if len(r) == 0:
         # This happens when the remote domain does not exist.
         r, errmsg = 125, "domain does not exist"
         logging.debug("No confirmation: %s %s", r, errmsg)
         return r, errmsg
     assert len(r) == 2, r
     r = struct.unpack("!H", r)[0]
-    l = chan.read(4)
-    assert len(l) == 4, l
-    l = struct.unpack("!I", l)[0]
-    errmsg = chan.read(l)
+    lc = chan.read(4)
+    assert len(lc) == 4, lc
+    lu = struct.unpack("!I", lc)[0]
+    errmsg = chan.read(lu)
     logging.debug("Received confirmation: %s %s", r, errmsg)
     return r, errmsg
 class MyThread(threading.Thread):
-
     def run(self):
         try:
             self._run()
-        except Exception as e:
-            logging.error("%s: unexpected exception", threading.currentThread())
+        except Exception:
+            n = threading.current_thread().name
+            logging.error("%s: unexpected exception", n)
             tb = traceback.format_exc()
-            logging.error("%s: traceback: %s", threading.currentThread(), tb)
-            logging.error("%s: exiting program", threading.currentThread())
+            logging.error("%s: traceback: %s", n, tb)
+            logging.error("%s: exiting program", n)
             os._exit(124)
 class SignalSender(MyThread):
-
     def __init__(self, signals, sigqueue):
         """Handles signals by pushing them into a file-like object."""
         threading.Thread.__init__(self)
-        self.setDaemon(True)
-        self.queue = queue.Queue()
+        self.daemon = True
+        self.queue = Queue()
         self.sigqueue = sigqueue
         for sig in signals:
             signal.signal(sig, self.copy)
     def copy(self, signum, frame):
         self.queue.put(signum)
         logging.debug("Signal %s pushed to queue", signum)
     def _run(self):
         while True:
             signum = self.queue.get()
             logging.debug("Dequeued signal %s", signum)
             if signum is None:
                 break
             assert signum > 0
             self.sigqueue.write(struct.pack("!H", signum))
             self.sigqueue.flush()
             logging.debug("Wrote signal %s to remote end", signum)
 class Signaler(MyThread):
-
     def __init__(self, process, sigqueue):
         """Reads integers from a file-like object and relays that as kill()."""
         threading.Thread.__init__(self)
-        self.setDaemon(True)
+        self.daemon = True
         self.process = process
         self.sigqueue = sigqueue
     def _run(self):
         while True:
             data = self.sigqueue.read(2)
             if len(data) == 0:
                 logging.debug("Received no signal data")
                 break
             assert len(data) == 2
             signum = struct.unpack("!H", data)[0]
-            logging.debug("Received relayed signal %s, sending to process %s", signum, self.process.pid)
+            logging.debug(
+                "Received relayed signal %s, sending to process %s",
+                signum,
+                self.process.pid,
+            )
             try:
                 self.process.send_signal(signum)
             except BaseException as e:
-                logging.error("Failed to relay signal %s to process %s: %s", signum, self.process.pid, e)
+                logging.error(
+                    "Failed to relay signal %s to process %s: %s",
+                    signum,
+                    self.process.pid,
+                    e,
+                )
         logging.debug("End of signaler")
-def write(dst, buffer, l):
+def write(dst, buffer, ln):
     alreadywritten = 0
-    mv = memoryview(buffer)[:l]
+    mv = memoryview(buffer)[:ln]
     while len(mv):
         dst.write(mv)
         writtenthisloop = len(mv)
```
```diff
@@ -199,10 +224,10 @@ def write(dst, buffer, l):
         alreadywritten = alreadywritten + writtenthisloop
-def copy(src, dst, buffer, l):
+def copy(src, dst, buffer, ln):
     alreadyread = 0
-    mv = memoryview(buffer)[:l]
-    assert len(mv) == l, "Buffer object is too small: %s %s" % (len(mv), l)
+    mv = memoryview(buffer)[:ln]
+    assert len(mv) == ln, "Buffer object is too small: %s %s" % (len(mv), ln)
     while len(mv):
         _, _, _ = select.select([src], (), ())
         readthisloop = src.readinto(mv)
```
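The `write()`/`copy()` helpers renamed in these hunks share one pattern: slicing a `memoryview` over a single reusable buffer so partial reads and writes advance without copying bytes. The pattern in isolation, simplified; unlike the original, this sketch advances by the count `write()` returns rather than assuming the whole view was written:

```python
import io

# Simplified sketch of the write() helper above: a memoryview window over
# a reusable bytearray shrinks as data is flushed, with no byte copies.
def write_all(dst, buffer, ln):
    mv = memoryview(buffer)[:ln]
    while len(mv):
        written = dst.write(mv)  # may be a short write on real pipes
        mv = mv[written:]

buf = bytearray(b'0123456789abcdef')  # reusable buffer, only 10 bytes valid
out = io.BytesIO()
write_all(out, buf, 10)
print(out.getvalue())  # b'0123456789'
```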
```diff
@@ -210,220 +235,253 @@ def copy(src, dst, buffer, l):
             raise Exception("copy: Failed to read any bytes")
         mv = mv[readthisloop:]
         alreadyread = alreadyread + readthisloop
-    return write(dst, buffer, l)
+    return write(dst, buffer, ln)
 class DataMultiplexer(MyThread):
-
     def __init__(self, sources, sink):
         threading.Thread.__init__(self)
-        self.setDaemon(True)
-        self.sources = dict((s,num) for num, s in enumerate(sources))
+        self.daemon = True
+        self.sources = dict((s, num) for num, s in enumerate(sources))
         self.sink = sink
     def _run(self):
-        logging.debug("mux: Started with sources %s and sink %s", self.sources, self.sink)
+        logging.debug(
+            "mux: Started with sources %s and sink %s", self.sources, self.sink
+        )
         buffer = bytearray(MAX_MUX_READ)
         while self.sources:
-            sources, _, x = select.select((s for s in self.sources), (), (s for s in self.sources))
+            sources, _, x = select.select(
+                (s for s in self.sources), (), (s for s in self.sources)
+            )
             assert not x, x
             for s in sources:
                 n = self.sources[s]
                 logging.debug("mux: Source %s (%s) is active", n, s)
                 readthisloop = s.readinto(buffer)
                 if readthisloop == 0:
-                    logging.debug("mux: Received no bytes from source %s, signaling peer to close corresponding source", n)
+                    logging.debug(
+                        "mux: Received no bytes from source %s, signaling"
+                        " peer to close corresponding source",
+                        n,
+                    )
                     del self.sources[s]
                     header = struct.pack(PACKFORMAT, n, False, 0)
                     self.sink.write(header)
                     continue
-                l = readthisloop
-                header = struct.pack(PACKFORMAT, n, True, l)
+                ln = readthisloop
+                header = struct.pack(PACKFORMAT, n, True, ln)
                 self.sink.write(header)
-                write(self.sink, buffer, l)
+                write(self.sink, buffer, ln)
         logging.debug("mux: End of data multiplexer")
 class DataDemultiplexer(MyThread):
-
     def __init__(self, source, sinks):
         threading.Thread.__init__(self)
-        self.setDaemon(True)
+        self.daemon = True
         self.sinks = dict(enumerate(sinks))
         self.source = source
     def _run(self):
-        logging.debug("demux: Started with source %s and sinks %s", self.source, self.sinks)
+        logging.debug(
+            "demux: Started with source %s and sinks %s",
+            self.source,
+            self.sinks,
+        )
         buffer = bytearray(MAX_MUX_READ)
         while self.sinks:
             r, _, x = select.select([self.source], (), [self.source])
             assert not x, x
             for s in r:
                 header = s.read(PACKLEN)
-                if header == "":
-                    logging.debug("demux: Received no bytes from source, closing all sinks")
+                if header == b"":
+                    logging.debug(
+                        "demux: Received no bytes from source, closing sinks",
+                    )
                     for sink in self.sinks.values():
                         sink.close()
                     self.sinks = []
                     break
-                n, active, l = struct.unpack(PACKFORMAT, header)
+                n, active, ln = struct.unpack(PACKFORMAT, header)
                 if not active:
-                    logging.debug("demux: Source %s now inactive, closing corresponding sink %s", s, self.sinks[n])
+                    logging.debug(
+                        "demux: Source %s inactive, closing matching sink %s",
+                        s,
+                        self.sinks[n],
+                    )
                     self.sinks[n].close()
                     del self.sinks[n]
                 else:
-                    copy(self.source, self.sinks[n], buffer, l)
+                    copy(self.source, self.sinks[n], buffer, ln)
         logging.debug("demux: End of data demultiplexer")
+def quotedargs():
+    return " ".join(quote(x) for x in sys.argv[1:])
+def quotedargs_ellipsized(cmdlist):
+    text = " ".join(quote(x) for x in cmdlist)
+    if len(text) > 80:
+        text = text[:77] + "..."
+    return text
 def main_master():
+    set_proc_name("bombshell-client (master) %s" % quotedargs())
     global logging
     logging = LoggingEmu("master")
-    logging.info("Started with arguments: %s", sys.argv[1:])
+    logging.info("Started with arguments: %s", quotedargs_ellipsized(sys.argv[1:]))
     global debug_enabled
     args = sys.argv[1:]
     if args[0] == "-d":
         args = args[1:]
         debug_enabled = True
     remote_vm = args[0]
     remote_command = args[1:]
     assert remote_command
     def anypython(exe):
-        return "` test -x %s && echo %s || echo python`" % (pipes.quote(exe),
-                                                            pipes.quote(exe))
+        return "` test -x %s && echo %s || echo python3`" % (
+            quote(exe),
+            quote(exe),
+        )
     remote_helper_text = b"exec "
     remote_helper_text += bytes(anypython(sys.executable), "utf-8")
     remote_helper_text += bytes(" -u -c ", "utf-8")
-    remote_helper_text += bytes(pipes.quote(open(__file__, "r").read()), "ascii")
+    remote_helper_text += bytes(
+        quote(open(__file__, "r").read()),
+        "ascii",
+    )
     remote_helper_text += b" -d " if debug_enabled else b" "
     remote_helper_text += base64.b64encode(pickle.dumps(remote_command, 2))
     remote_helper_text += b"\n"
     saved_stderr = openfdforappend(os.dup(sys.stderr.fileno()))
-    with mutexfile(os.path.expanduser("~/.bombshell-lock")):
-        try:
-            p = subprocess.Popen(
-                ["qrexec-client-vm", remote_vm, "qubes.VMShell"],
-                stdin=subprocess.PIPE,
-                stdout=subprocess.PIPE,
-                close_fds=True,
-                preexec_fn=os.setpgrp,
-                bufsize=0,
-            )
-        except OSError as e:
-            logging.error("cannot launch qrexec-client-vm: %s", e)
-            return 127
+    try:
+        p = subprocess.Popen(
+            ["qrexec-client-vm", remote_vm, "qubes.VMShell"],
+            stdin=subprocess.PIPE,
+            stdout=subprocess.PIPE,
+            close_fds=True,
+            preexec_fn=os.setpgrp,
+            bufsize=0,
+        )
+    except OSError as e:
+        logging.error("cannot launch qrexec-client-vm: %s", e)
+        return 127
     logging.debug("Writing the helper text into the other side")
     p.stdin.write(remote_helper_text)
     p.stdin.flush()
     confirmation, errmsg = recv_confirmation(p.stdout)
     if confirmation != 0:
         logging.error("remote: %s", errmsg)
         return confirmation
     handled_signals = (
         signal.SIGINT,
         signal.SIGABRT,
         signal.SIGALRM,
         signal.SIGTERM,
         signal.SIGUSR1,
         signal.SIGUSR2,
         signal.SIGTSTP,
         signal.SIGCONT,
     )
     read_signals, write_signals = pairofpipes()
     signaler = SignalSender(handled_signals, write_signals)
-    signaler.setName("master signaler")
+    signaler.name = "master signaler"
     signaler.start()
     muxer = DataMultiplexer([sys.stdin, read_signals], p.stdin)
-    muxer.setName("master multiplexer")
+    muxer.name = "master multiplexer"
     muxer.start()
     demuxer = DataDemultiplexer(p.stdout, [sys.stdout, saved_stderr])
-    demuxer.setName("master demultiplexer")
+    demuxer.name = "master demultiplexer"
     demuxer.start()
     retval = p.wait()
     logging.info("Return code %s for qubes.VMShell proxy", retval)
     demuxer.join()
     logging.info("Ending bombshell")
     return retval
 def pairofpipes():
     read, write = os.pipe()
     return os.fdopen(read, "rb", 0), os.fdopen(write, "wb", 0)
 def main_remote():
+    set_proc_name("bombshell-client (remote) %s" % quotedargs())
     global logging
     logging = LoggingEmu("remote")
-    logging.info("Started with arguments: %s", sys.argv[1:])
+    logging.info("Started with arguments: %s", quotedargs_ellipsized(sys.argv[1:]))
     global debug_enabled
     if "-d" in sys.argv[1:]:
         debug_enabled = True
         cmd = sys.argv[2]
     else:
         cmd = sys.argv[1]
     cmd = pickle.loads(base64.b64decode(cmd))
     logging.debug("Received command: %s", cmd)
-    nicecmd = " ".join(pipes.quote(a) for a in cmd)
+    nicecmd = " ".join(quote(a) for a in cmd)
     try:
         p = subprocess.Popen(
             cmd,
             # ["strace", "-s4096", "-ff"] + cmd,
-            stdin = subprocess.PIPE,
-            stdout = subprocess.PIPE,
-            stderr = subprocess.PIPE,
+            stdin=subprocess.PIPE,
+            stdout=subprocess.PIPE,
+            stderr=subprocess.PIPE,
             close_fds=True,
             bufsize=0,
         )
         send_confirmation(sys.stdout, 0, b"")
     except OSError as e:
         msg = "cannot execute %s: %s" % (nicecmd, e)
         logging.error(msg)
         send_confirmation(sys.stdout, 127, bytes(msg, "utf-8"))
         sys.exit(0)
     except BaseException as e:
         msg = "cannot execute %s: %s" % (nicecmd, e)
         logging.error(msg)
         send_confirmation(sys.stdout, 126, bytes(msg, "utf-8"))
         sys.exit(0)
     signals_read, signals_written = pairofpipes()
     signaler = Signaler(p, signals_read)
-    signaler.setName("remote signaler")
+    signaler.name = "remote signaler"
     signaler.start()
     demuxer = DataDemultiplexer(sys.stdin, [p.stdin, signals_written])
-    demuxer.setName("remote demultiplexer")
+    demuxer.name = "remote demultiplexer"
     demuxer.start()
     muxer = DataMultiplexer([p.stdout, p.stderr], sys.stdout)
-    muxer.setName("remote multiplexer")
+    muxer.name = "remote multiplexer"
     muxer.start()
-    logging.info("Started %s", nicecmd)
+    nicecmd_ellipsized = quotedargs_ellipsized(cmd)
+    logging.info("Started %s", nicecmd_ellipsized)
     retval = p.wait()
-    logging.info("Return code %s for %s", retval, nicecmd)
+    logging.info("Return code %s for %s", retval, nicecmd_ellipsized)
     muxer.join()
     logging.info("Ending bombshell")
     return retval
 sys.stdin = openfdforread(sys.stdin.fileno())
```


```diff
@@ -1 +1 @@
-["RELEASE": "25 32 34 35"]
+["RELEASE": "q4.2 38 39"]
```


@@ -63,14 +63,20 @@ class x(object):
 display = x()

-BUFSIZE = 128*1024  # any bigger and it causes issues because we don't read multiple chunks until completion
+BUFSIZE = 64*1024  # any bigger and it causes issues because we don't read multiple chunks until completion
 CONNECTION_TRANSPORT = "qubes"
 CONNECTION_OPTIONS = {
     'management_proxy': '--management-proxy',
 }

+def debug(text):
+    return
+    print(text, file=sys.stderr)
+
 def encode_exception(exc, stream):
+    debug("encoding exception")
     stream.write('{}\n'.format(len(exc.__class__.__name__)).encode('ascii'))
     stream.write('{}'.format(exc.__class__.__name__).encode('ascii'))
     for attr in "errno", "filename", "message", "strerror":
@@ -79,6 +85,7 @@ def encode_exception(exc, stream):

 def decode_exception(stream):
+    debug("decoding exception")
     name_len = stream.readline(16)
     name_len = int(name_len)
     name = stream.read(name_len)
@@ -107,6 +114,7 @@ def decode_exception(stream):

 def popen(cmd, in_data, outf=sys.stdout):
+    debug("popening on remote %s" % type(in_data))
     try:
         p = subprocess.Popen(
             cmd, shell=False, stdin=subprocess.PIPE,
@@ -124,9 +132,11 @@ def popen(cmd, in_data, outf=sys.stdout):
     outf.write('{}\n'.format(len(err)).encode('ascii'))
     outf.write(err)
     outf.flush()
+    debug("finished popening")

 def put(out_path):
+    debug("dest writing %s" % out_path)
     try:
         f = open(out_path, "wb")
         sys.stdout.write(b'Y\n')
@@ -136,18 +146,25 @@ def put(out_path):
         return
     while True:
         chunksize = int(sys.stdin.readline(16))
-        if chunksize == 0:
+        if not chunksize:
+            debug("looks like we have no more to read")
             break
-        chunk = sys.stdin.read(chunksize)
-        assert len(chunk) == chunksize, ("Mismatch in chunk length", len(chunk), chunksize)
-        try:
-            f.write(chunk)
-            sys.stdout.write(b'Y\n')
-        except (IOError, OSError) as e:
-            sys.stdout.write(b'N\n')
-            encode_exception(e, sys.stdout)
-            f.close()
-            return
+        while chunksize:
+            debug(type(chunksize))
+            chunk = sys.stdin.read(chunksize)
+            assert chunk
+            debug("dest writing %s" % len(chunk))
+            try:
+                f.write(chunk)
+            except (IOError, OSError) as e:
+                sys.stdout.write(b'N\n')
+                encode_exception(e, sys.stdout)
+                f.close()
+                return
+            chunksize = chunksize - len(chunk)
+            debug("remaining %s" % chunksize)
+        sys.stdout.write(b'Y\n')
+        sys.stdout.flush()
     try:
         f.flush()
     except (IOError, OSError) as e:
@@ -155,10 +172,12 @@ def put(out_path):
         encode_exception(e, sys.stdout)
         return
     finally:
+        debug("finished writing dest")
         f.close()

 def fetch(in_path, bufsize):
+    debug("Fetching from remote %s" % in_path)
     try:
         f = open(in_path, "rb")
     except (IOError, OSError) as e:
@@ -206,7 +225,7 @@ sys.stdout = sys.stdout.buffer if hasattr(sys.stdout, 'buffer') else sys.stdout
 '''

 payload = b'\n\n'.join(
     inspect.getsource(x).encode("utf-8")
-    for x in (encode_exception, popen, put, fetch)
+    for x in (debug, encode_exception, popen, put, fetch)
 ) + \
 b'''
@@ -255,7 +274,7 @@ class Connection(ConnectionBase):
     def set_options(self, task_keys=None, var_options=None, direct=None):
         super(Connection, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
         # FIXME HORRIBLE WORKAROUND FIXME
-        if task_keys['delegate_to'] and 'management_proxy' in self._options:
+        if task_keys and task_keys['delegate_to'] and self._options and 'management_proxy' in self._options:
             self._options['management_proxy'] = ''

     def __init__(self, play_context, new_stdin, *args, **kwargs):
@@ -266,7 +285,6 @@ class Connection(ConnectionBase):
             self.transport_cmd = kwargs['transport_cmd']
             return
         self.transport_cmd = distutils.spawn.find_executable('qrun')
-        self.transport_cmd = None
         if not self.transport_cmd:
             self.transport_cmd = os.path.join(
                 os.path.dirname(__file__),
@@ -295,7 +313,7 @@ class Connection(ConnectionBase):
         if not self._connected:
             remote_cmd = [to_bytes(x, errors='surrogate_or_strict') for x in [
                 # 'strace', '-s', '2048', '-o', '/tmp/log',
-                'python', '-u', '-i', '-c', preamble
+                'python3', '-u', '-i', '-c', preamble
             ]]
             addr = self._play_context.remote_addr
             proxy = to_bytes(self.get_option("management_proxy")) if self.get_option("management_proxy") else ""
@@ -357,16 +375,18 @@ class Connection(ConnectionBase):
         cmd = shlex.split(cmd)
         display.vvvv("EXEC %s" % cmd, host=self._play_context.remote_addr)
         try:
-            payload = ('popen(%r, %r)\n' % (cmd, in_data)).encode("utf-8")
+            payload = ('popen(%r, %r)\n\n' % (cmd, in_data)).encode("utf-8")
             self._transport.stdin.write(payload)
             self._transport.stdin.flush()
             yesno = self._transport.stdout.readline(2)
+            debug("Reading yesno")
         except Exception:
             self._abort_transport()
             raise
         if yesno == "Y\n" or yesno == b"Y\n":
             try:
                 retcode = self._transport.stdout.readline(16)
+                debug("Reading retcode")
                 try:
                     retcode = int(retcode)
                 except Exception:
@@ -403,6 +423,7 @@ class Connection(ConnectionBase):
         else:
             self._abort_transport()
             raise errors.AnsibleError("pass/fail from remote end is unexpected: %r" % yesno)
+        debug("finished popening on master")

     def put_file(self, in_path, out_path):
         '''Transfer a file from local to VM.'''
@@ -424,6 +445,7 @@ class Connection(ConnectionBase):
         with open(in_path, 'rb') as in_file:
             while True:
                 chunk = in_file.read(BUFSIZE)
+                debug("source writing %s bytes" % len(chunk))
                 try:
                     self._transport.stdin.write(("%s\n" % len(chunk)).encode("utf-8"))
                     self._transport.stdin.flush()
@@ -443,9 +465,15 @@ class Connection(ConnectionBase):
                 else:
                     self._abort_transport()
                     raise errors.AnsibleError("pass/fail from remote end is unexpected: %r" % yesno)
+        debug("on this side it's all good")
+        self._transport.stdin.write(("%s\n" % 0).encode("utf-8"))
+        self._transport.stdin.flush()
+        debug("finished writing source")

     def fetch_file(self, in_path, out_path):
         '''Fetch a file from VM to local.'''
+        debug("fetching to local")
         super(Connection, self).fetch_file(in_path, out_path)
         display.vvvv("FETCH %s to %s" % (in_path, out_path), host=self._play_context.remote_addr)
         in_path = _prefix_login_path(in_path)
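The `put()`/`put_file()` changes above implement a simple length-prefixed chunk protocol: the sender writes each chunk's size on its own line followed by the raw bytes, and a final `0` line ends the stream; the receiver keeps reading until the whole announced chunk has arrived, since a pipe `read()` may return short. A self-contained sketch of that framing (the helper names are hypothetical, and `io.BytesIO` stands in for the qrun pipe):

```python
import io

def send_chunks(data: bytes, pipe, bufsize: int = 4) -> None:
    # Sender side: prefix each chunk with its length on one line;
    # a final "0" line marks end-of-stream (as the new put_file() does).
    for off in range(0, len(data), bufsize):
        chunk = data[off:off + bufsize]
        pipe.write(("%d\n" % len(chunk)).encode("ascii"))
        pipe.write(chunk)
    pipe.write(b"0\n")

def recv_chunks(pipe) -> bytes:
    # Receiver side: like the new put(), loop until the full announced
    # chunk size has been drained, tolerating short reads.
    out = b""
    while True:
        chunksize = int(pipe.readline(16))
        if not chunksize:
            break
        while chunksize:
            chunk = pipe.read(chunksize)
            assert chunk  # EOF mid-chunk would be a protocol error
            out += chunk
            chunksize -= len(chunk)
    return out

payload = b"hello, qubes"
pipe = io.BytesIO()
send_chunks(payload, pipe)
pipe.seek(0)
assert recv_chunks(pipe) == payload
```

The `readline(16)` limit mirrors the plugin's own bound on the length line, so a misbehaving peer cannot make the receiver buffer an arbitrarily long "number".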


@@ -24,13 +24,13 @@ Integrate this software into your Ansible setup (within your `managevm` VM) by:

 ## Set up the policy file for `qubes.VMShell`

-Edit (as `root`) the file `/etc/qubes-rpc/policy/qubes.VMShell`
+Edit (as `root`) the file `/etc/qubes/policy.d/80-ansible-qubes.policy`
 located on the file system of your `dom0`.

 At the top of the file, add the following two lines:

 ```
-managevm $anyvm allow
+qubes.VMShell * managevm * allow
 ```

 This first line lets `managevm` execute any commands on any VM on your
@@ -41,25 +41,21 @@ security prompt to allow `qubes.VMShell` on the target VM you're managing.

 Now save that file, and exit your editor.

+If your dom0 has a file `/etc/qubes-rpc/policy/qubes.VMShell`,
+you can delete it now.  It is obsolete.
+
 ### Optional: allow `managevm` to manage `dom0`

-Before the line you added in the previous step, add this line:
-
-```
-managevm dom0 allow
-```
-
-This line lets `managevm` execute any commands in `dom0`.  Be sure you
-understand the security implications of such a thing.
-
-The next step is to add the RPC service proper.  Edit the file
+The next step is to add the RPC service proper to dom0.  Edit the file
 `/etc/qubes-rpc/qubes.VMShell` to have a single line that contains:

 ```
 exec bash
 ```

-That is it.  `dom0` should work now.
+Make the file executable.
+
+That is it.  `dom0` should work now.  Note you do this at your own risk.

 ## Test `qrun` works
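The policy edit described in the hunk above can be scripted. A minimal sketch, assuming the Qubes 4.2 policy path and the `managevm` qube name used in this README; run it as `root` in `dom0` and adjust the qube name to your setup:

```shell
# Sketch only: install the qubes.VMShell policy for the managevm qube
# (Qubes 4.2+ policy format). Run as root in dom0.
cat > /etc/qubes/policy.d/80-ansible-qubes.policy <<'EOF'
qubes.VMShell * managevm * allow
EOF
```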


@@ -13,11 +13,11 @@ to set up a policy that allows us to remotely execute commands on any VM of the
 network server, without having to be physically present to click any dialogs authorizing
 the execution of those commands.

-In `dom0` of your Qubes server, edit `/etc/qubes-rpc/policy/qubes.VMShell` to add,
+In `dom0` of your Qubes server, edit `/etc/qubes/policy.d/80-ansible-qubes.policy` to add,
 at the top of the file, a policy that looks like this:

 ```
-exp-manager $anyvm allow
+qubes.VMShell * managevm * allow
 ```

 This tells Qubes OS that `exp-manager` is now authorized to run any command in any of the VMs.
@@ -25,13 +25,13 @@ This tells Qubes OS that `exp-manager` is now authorized to run any command in a

 **Security note**: this does mean that anyone with access to `exp-manager` can do
 literally anything on any of your VMs in your Qubes OS server.

-If that is not what you want, then replace `$anyvm` with the name of the VMs you would like
-to manage.  For example: if you would like `exp-manager` to be authorized to run commands
-*only* on `exp-net`, then you can use the following policy:
+If that is not what you want, then replace `*` after `managevm` with the name of the VMs you
+would like to manage.  For example: if you would like `exp-manager` to be authorized to run
+commands *only* on `exp-net`, then you can use the following policy:

 ```
-exp-manager exp-net allow
-exp-manager $anyvm deny
+qubes.VMShell * exp-manager exp-net allow
+qubes.VMShell * exp-manager @anyvm deny
 ```

 Try it out now.  SSH from your manager machine into `exp-manager` and run:
@@ -47,7 +47,7 @@ You should see `yes` followed by `exp-net` on the output side.

 If you expect that you will need to run commands in `dom0` from your manager machine
 (say, to create, stop, start and modify VMs in the Qubes OS server),
 then you will have to create a file `/etc/qubes-rpc/qubes.VMShell` as `root` in `dom0`,
-with the contents `/bin/bash` and permission mode `0644`.  Doing this will enable you
+with the contents `/bin/bash` and permission mode `0755`.  Doing this will enable you
 to run commands on `dom0` which you can subsequently test in `exp-manager` by running command:

 ```
@@ -57,7 +57,7 @@ qvm-run dom0 'echo yes ; hostname'
 like you did before.

 **Security note**: this does mean that anyone with access to `exp-manager` can do
-literally anything on your Qubes OS server.
+*literally anything* on your Qubes OS server.  You have been warned.

 ## Integrate your Ansible setup
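The dom0 steps in the hunk above (create `/etc/qubes-rpc/qubes.VMShell` containing `/bin/bash` with mode `0755`, then test from `exp-manager`) can be sketched as follows; this is an outline of the documented steps, not a tested script:

```shell
# Sketch: create the qubes.VMShell RPC endpoint. Run as root in dom0.
echo '/bin/bash' > /etc/qubes-rpc/qubes.VMShell
chmod 0755 /etc/qubes-rpc/qubes.VMShell

# Then verify from exp-manager, as the document suggests:
# qvm-run dom0 'echo yes ; hostname'
```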

lookup_plugins/jq.py  Normal file (30 lines)

@@ -0,0 +1,30 @@
+from ansible.errors import AnsibleError
+from ansible.plugins.lookup import LookupBase
+import json
+import sys
+import subprocess
+
+try:
+    from __main__ import display
+except ImportError:
+    from ansible.utils.display import Display
+    display = Display()
+
+UNDEFINED = object()
+
+class LookupModule(LookupBase):
+
+    def run(self, args, variables):
+        i = json.dumps(args[0])
+        c = ["jq", args[1]]
+        p = subprocess.Popen(c, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+        o, e = p.communicate(i)
+        r = p.wait()
+        if r != 0 or e:
+            assert 0, e
+            raise subprocess.CalledProcessError(r, c, o, e)
+        r = json.loads(o)
+        return r
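A hypothetical usage of this new lookup from a playbook — assuming the plugin sits on Ansible's `lookup_plugins` path and a `jq` binary is installed on the controller; the data structure and program here are illustrative only:

```yaml
# Hypothetical example: the first lookup term is the data (serialized to
# JSON and fed to jq's stdin), the second is the jq program.
- debug:
    msg: "{{ lookup('jq', {'users': [{'name': 'alice'}]}, '.users[0].name') }}"
```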