VRFs / Jails / Containers

VRFs / Jails / Containers

Grant Taylor-2
Does Gentoo have any support for VRFs or (chroot) Jails or Containers
without going down the Docker (et al) path?

I'm wanting to do some things with a Gentoo router that are trivial to do
with network namespaces via manual commands ~> scripts.  But that's far
from a standard Gentoo init-script-based system.  And I'd like something
more Gentoo-standards-based.

Does Gentoo have or support anything like this natively?  Or am I
getting into territory where I'm rolling my own?

Re: VRFs / Jails / Containers

Bill Kenworthy
On 3/2/19 10:32 am, Grant Taylor wrote:

> Does Gentoo have any support for VRFs or (chroot) Jails or Containers
> without going down the Docker (et al) path?
>
> I'm wanting to do some things with a Gentoo router that is trivial to
> do with network namespaces via manual commands ~> scripts.  But that's
> far from standard Gentoo init script based system.  And I'd like
> something more Gentoo standards based.
>
> Does Gentoo have or support anything like this natively?  Or am I
> getting into territory where I'm rolling my own


LXC containers ??


BillK


Re: VRFs / Jails / Containers

Grant Taylor-2
On 2/2/19 7:36 PM, Bill Kenworthy wrote:
> LXC containers ??

Maybe.

I just feel like that's more heavyweight than I want.

I'm functionally running a series of ip commands to configure networking
in a special way.

Maybe I should look into what it takes to extend netifrc to support what
I want.  I sort of think that VRFs could be modeled on bonding and / or
bridge and / or VLAN devices.  At least in the master / slave aspect.

I'm sure that veth will be a new concept, but it may be possible to model
it after a tunnel interface.

It would be really nice to have network namespace support.  But I don't
see anything that it could be modeled on.

Re: VRFs / Jails / Containers

Michael Jones
systemd-nspawn is also an option, but I don't think that'll work with OpenRC.

On Sat, Feb 2, 2019 at 9:56 PM Grant Taylor <[hidden email]> wrote:
On 2/2/19 7:36 PM, Bill Kenworthy wrote:
> LXC containers ??

Maybe.

I just feel like that's more heavy weight than I want.

I'm functionally running a series of ip commands to configure networking
in a special way.

Maybe I should look into what it takes to extend netifrc to support what
I want.  I sort of think that VRF could model off of bonding and / or
bridge and / or VLAN devices.  At least in the master / slave aspect.

I'm sure that veth will be a new concept, but it may be able to model
after a tunnel interface.

It would be really nice to have network namespace support.  But I don't
see anything that could be modeled off of.

Re: VRFs / Jails / Containers

Grant Taylor-2
On 2/2/19 9:39 PM, Michael Jones wrote:
> systemd-nspawn is also an option, but I don't think that'll work with
> OpenRC.

Ya....  I moved (back to) Gentoo to get away from systemd.  I'm not
going to voluntarily opt to use it, or any of its children.  That's
/my/ opinion.  I know others' opinions differ.

Thank you for the information all the same.

Re: VRFs / Jails / Containers

Bill Kenworthy
On 3/2/19 12:52 pm, Grant Taylor wrote:

> On 2/2/19 9:39 PM, Michael Jones wrote:
>> systemd-nspawn is also an option, but I don't think that'll work with
>> OpenRC.
>
> Ya....  I moved (back to) Gentoo to get away from systemd.  I'm not
> going to voluntarily opt to use it, or any of it's children.  That's
> /my/ opinion.  I know others opinions differ.
>
> Thank you for the information all the same.
>
I am unclear on what you are trying to do.  I find the Gentoo scripts
good for the simple case, but a complex case almost always needs extra
help.  If it's networking, could something like Shorewall help?

BillK



Re: VRFs / Jails / Containers

Alarig Le Lay
In reply to this post by Grant Taylor-2
For the VRF part, Gentoo supports it; it’s in the upstream kernel
sources.

I only tried it once, but failed because my sshd should have been launched
in my VRF and I didn’t quickly find a way to do it.

But otherwise, it worked.

--
Alarig

Re: VRFs / Jails / Containers

Rich Freeman
In reply to this post by Grant Taylor-2
On Sat, Feb 2, 2019 at 11:52 PM Grant Taylor
<[hidden email]> wrote:
>
> On 2/2/19 9:39 PM, Michael Jones wrote:
> > systemd-nspawn is also an option, but I don't think that'll work with
> > OpenRC.
>
> Ya....  I moved (back to) Gentoo to get away from systemd.  I'm not
> going to voluntarily opt to use it, or any of it's children.  That's
> /my/ opinion.  I know others opinions differ.
>

Nothing wrong with that approach.  I use systemd-nspawn to run a bunch
of containers, hosted on Gentoo, and many of which run Gentoo.
However, these all run systemd, and I don't believe you can run nspawn
without a systemd host (the guest/container can be anything).  In my
case these are containers running full distros with systemd, not just
single-process containers.  However, nspawn does support
single-process containers, and that includes with veth, but nspawn
WON'T initialize networking in those containers (i.e. DHCP/etc),
leaving this up to the guest (it does provide a config file for
systemd-networkd inside the guest, if it is in use, to autoconfigure
DHCP).
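
For reference, roughly the kind of invocation involved, a minimal sketch
with a placeholder directory path:

   systemd-nspawn -D /var/lib/machines/router1 --network-veth --boot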

I'm not exactly certain what you're trying to accomplish, but
namespaces are just a kernel system call when it comes down to it (two
of them I think offhand).  Two util-linux programs provide direct
access to them for shell scripts: unshare and nsenter.  If you're just
trying to run a process in a separate namespace so that it can use
veth/etc then you could probably initialize that in a script run from
unshare.  If you don't need more isolation you could run it right from
the host filesystem without a separate mount or process namespace.  Or
you could create a new mount namespace but only modify specific parts
of it like /var/lib or whatever.
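
A minimal sketch of that approach (the script path and the PID variable
are hypothetical):

   # create new network, mount, and UTS namespaces and run a setup script
   # in them; the script configures veth/addresses and then execs a daemon
   unshare --net --mount --uts /usr/local/sbin/netns-router.sh

   # later, run a command inside the namespaces of that existing process
   nsenter --net --mount --uts --target "$ROUTER_PID" ip route show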

People generally equate containers with Docker, but as you seem to get,
you can do a lot with namespaces without basically running completely
independent distros.  Now, I will point out that there are good
reasons for keeping things separate - they may or may not apply to
your application.  If you just want to run a single daemon on 14
different IPs and have each of those daemons see the same filesystem
minus /var/lib and /etc that is something you could certainly do with
namespaces and the only resource cost would be the storage of the
extra /var/lib and /etc directories (they could even use the same
shared libraries in RAM, and indeed the same process image itself I
think).

The only gotcha is that I'm not sure how much of it is already done,
so you may have to roll your own.  If you find generic solutions for
running services in partially-isolated namespaces with network
initialization taken care of for you I'd be very interested in hearing
about it.

--
Rich

Re: VRFs / Jails / Containers

Michael Orlitzky
In reply to this post by Grant Taylor-2
On 2/2/19 10:56 PM, Grant Taylor wrote:

> On 2/2/19 7:36 PM, Bill Kenworthy wrote:
>> LXC containers ??
>
> Maybe.
>
> I just feel like that's more heavy weight than I want.
>
> I'm functionally running a series of ip commands to configure networking
> in a special way.
>

You can add commands to your existing network configuration that will be
run when an interface comes up. For example, in /etc/conf.d/net,

   ifup_wlan0="iwconfig \$int key s:secretkey enc open essid foobar"

(taken from the example file that ships with OpenRC).
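
In the same spirit, /etc/conf.d/net also accepts preup()/postup() hook
functions; a sketch of using one to enslave an interface to a VRF (device
and table names are made up, variable names as in OpenRC's net.example):

   postup() {
       # hypothetical: attach eth1 to a VRF device once it is up
       if [ "${IFACE}" = "eth1" ]; then
           ip link add vrf-blue type vrf table 10 2>/dev/null
           ip link set vrf-blue up
           ip link set eth1 master vrf-blue
       fi
   }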

Re: VRFs / Jails / Containers

Grant Taylor-2
In reply to this post by Rich Freeman
On 2/3/19 5:37 AM, Rich Freeman wrote:

> Nothing wrong with that approach.  I use systemd-nspawn to run a bunch
> of containers, hosted in Gentoo, and many of which run Gentoo.  However,
> these all run systemd and I don't believe you can run nspawn without a
> systemd host (the guest/container can be anything).  These are containers
> running full distros with systemd in my case, not just single-process
> containers, in my case.  However, nspawn does support single-process
> containers, and that includes with veth, but nspawn WON'T initialize
> networking in those containers (ie DHCP/etc), leaving this up to the guest
> (it does provide a config file for systemd-networkd inside the guest if
> it is in use to autoconfigure DHCP).

ACK

That makes me think that systemd-nspawn is less of a fit for what I'm
wanting to do.

> I'm not exactly certain what you're trying to accomplish, but namespaces
> are just a kernel system call when it comes down to it (two of them I
> think offhand).  Two util-linux programs provide direct access to them
> for shell scripts: unshare and nsenter.  If you're just trying to run a
> process in a separate namespace so that it can use veth/etc then you could
> probably initialize that in a script run from unshare.  If you don't need
> more isolation you could run it right from the host filesystem without
> a separate mount or process namespace.  Or you could create a new mount
> namespace but only modify specific parts of it like /var/lib or whatever.

That's quite close to what I'm doing.  I'm actually using unshare to
create a mount / network / UTS namespace (set) and then running some
commands in them.

The namespaces are functioning as routers.  I have an OvS switch
connected to the main / default (unnamed) namespace and nine (internal)
OvS ports, each one in a different namespace.  Thus forming a backbone
between the ten network namespaces.

Each of the nine network namespaces then has a veth pair that connects
back to the main network namespace as an L2 interface that VirtualBox
(et al) can glom onto as necessary.

This way I can easily have nine completely different networks that VMs
can use.  My main home network has a route to these networks via my
workstation.  (I'm actually using routing protocols to distribute this.)
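
A rough sketch of what one of those nine looks like if done with named
namespaces instead of unshare (names and addresses are placeholders):

   # internal OvS port on the backbone switch, moved into the namespace
   ip netns add rtr1
   ovs-vsctl add-port br-backbone rtr1-bb -- set interface rtr1-bb type=internal
   ip link set rtr1-bb netns rtr1
   ip netns exec rtr1 ip addr add 192.0.2.1/24 dev rtr1-bb
   ip netns exec rtr1 ip link set rtr1-bb up

   # veth pair: the host-side end is what VirtualBox bridges to; the peer
   # lands inside the namespace as that router's LAN interface
   ip link add rtr1-lan type veth peer name lan0 netns rtr1
   ip link set rtr1-lan up
   ip netns exec rtr1 ip link set lan0 up
   ip netns exec rtr1 ip addr add 198.51.100.1/24 dev lan0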

So the main use of the network namespaces is as a basic IP router.
There don't /need/ to be any processes running in them.  I do run BIRD
in the network namespaces for simplicity reasons.  But that's more
ancillary.

I don't strictly need the mount namespaces for what I'm currently doing.
That's left over from when I was running Quagga and /needed/ to alter
some mounts to run multiple instances of Quagga on the same machine.

I do like the UTS namespace so that each "router" has a different host
name when I enter it.

Maybe this helps explain /what/ I'm doing.  As for /why/ I'm doing it,
well because reasons.  Maybe not even good reasons.  But I'm still doing
it.  ¯\_(ツ)_/¯  I'm happy to discuss this in a private thread if anyone
is really curious.

> People generally equate containers with docker but as you seem to get
> you can do a lot with namespaces without basically running completely
> independent distros.

Yep.  I feel like independent distros, plus heavier-weight management
daemons on top, are a LOT more than I want.

As stated, I don't really /need/ to run processes in the containers.  I
do because it's easy.  The only thing I /need/ is the separate IP stack
/ configuration.

> Now, I will point out that there are good reasons for keeping things
> separate - they may or may not apply to your application.  If you just
> want to run a single daemon on 14 different IPs and have each of those
> daemons see the same filesystem minus /var/lib and /etc that is something
> you could certainly do with namespaces and the only resource cost would
> be the storage of the extra /var/lib and /etc directories (they could
> even use the same shared libraries in RAM, and indeed the same process
> image itself I think).

Yep.

> The only gotcha is that I'm not sure how much of it is already done, so
> you may have to roll your own.  If you find generic solutions for running
> services in partially-isolated namespaces with network initialization
> taken care of for you I'd be very interested in hearing about it.

I think there are a LOT of solutions for creating and managing
containers.  (I'm using the term "container" loosely here.)  The thing
is that many of them are each their own heavyweight entity.  I have yet
to find any that integrate well with OS init scripts.

I feel like what I want to do can /almost/ be done with netifrc.  Or
that netifrc could be extended, with what (I think is) /little/
additional work, to do it.

I don't know that network namespaces are strictly required.  I've been
using them for years.  That being said, the current incarnation of
Virtual Routing and Forwarding (VRF) provided by l3mdev seems to be very
promising.  I expect that I could make VRF (l3mdev) do what I wanted to
do too.  At least the part that I /need/.  I'm not sure how to launch
processes associated with the VRF (l3mdev).  I'm confident it's
possible, but I've not done it.
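
For what it's worth, recent iproute2 does have a helper for that; a
hedged sketch, with the device and table names made up (and the daemon
only as an example):

   ip link add vrf-wan type vrf table 100
   ip link set vrf-wan up
   ip link set eth1 master vrf-wan
   # run a process with its sockets bound to that VRF
   # (this relies on cgroup v2 / BPF support in the kernel)
   ip vrf exec vrf-wan /usr/sbin/sshd -D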

But, even VRF (l3mdev) is not supported by netifrc.  I feel like the
Policy Based Routing (PBR) support is a kludge and largely consists of
(parts of) ip / tc commands being put into the /etc/conf.d/net file.

I feel like bridging / bonding / VLANs have better support than PBR
does.  All of which are way better supported than VRF (l3mdev) which is
better supported than network namespaces.

Though, I'm not really surprised.  All of the init scripts that I've
seen seem to be designed around the premise of a singular system and
have no knowledge that there might be other (virtual) systems.  What
little I know about Docker is that even its configuration is
single-system in nature and still only applies to the instance that it's
working on.  I've not seen any OS init scripts that are aware of the
fact that they might be working on other systems.  I think the closest
I've seen is FreeBSD jails.  But even that uses separate init scripts,
which are again somewhat focused on the jail.

I need to do some thinking about /what/ /specifically/ I want to do
before I start thinking about /how/ to go about doing it.

That being said, I think it would be really nice to have various
interfaces tagged with what NetNS they belong to and use the same
net.$interface type init scripts for them.

Re: VRFs / Jails / Containers

Grant Taylor-2
In reply to this post by Bill Kenworthy
On 2/2/19 11:09 PM, Bill Kenworthy wrote:
> I am unclear on what you are trying to do.

See my reply to Rich's message for a description.

> I find the gentoo scripts good for the simple case but a complex case
> almost always needs extra help.

Yep.

I was hoping that there was something that I was unaware of or could
extend to do what I want to do.

> If its networking, could something like shorewall help?

No, I don't think that Shorewall or a similar firewall config management
system will help.

I also find those systems annoying.  Sure, they have their benefits.
But why do I need them when I should be able to do the same thing on a
stock Gentoo (or other) Linux system?  After all, they are using the same
kernel.  (Maybe a different version or config thereof.)

I will occasionally look at those solutions and treat them like themed
Lego sets.  I build them, look at them, analyze them, and pull out the
distinct Lego bricks that I want to use in my own system.  }:-)

Re: VRFs / Jails / Containers

Grant Taylor-2
In reply to this post by Alarig Le Lay
On 2/3/19 1:50 AM, Alarig Le Lay wrote:
> For the VRF part, Gentoo supports it; it’s in the upstream kernel
> sources.

Yep.  I've been doing Network Namespaces, and VRF to a lesser degree,
for quite a while now.  It's just all been manual or ad-hoc scripts.

> I only tried it once, but failed because my sshd should have been lunch
> in my VRF and I didn’t quickly find a way to do it.

Yep.

That's the type of integration that I've found lacking.

I'm only currently asking about how to configure the various network
components, not even how to run processes inside of the various systems.

> But otherwise, it worked.

It absolutely manually works.  I'm looking for the thing(s) to allow the
Gentoo OS init scripts to take over some of the management.  That's what
I'm finding lacking.  I asked my question because I was hoping that
someone would know about something I didn't.  ;-)

Re: VRFs / Jails / Containers

Grant Taylor-2
In reply to this post by Michael Orlitzky
On 2/3/19 6:26 AM, Michael Orlitzky wrote:
> You can add commands to your existing network configuration that will be
> run when an interface comes up. For example, in /etc/conf.d/net,
>
>    ifup_wlan0="iwconfig \$int key s:secretkey enc open essid foobar"

Ya....  I find that to be an absolute kludge.  Does it work?  Yes.  Is
it clean?  Probably not.  Is it graceful?  Absolutely not.

Think about how it's possible to configure bridging / bonding / VLANs
via various parameters and have netifrc construct the commands that
are run in the background.

I'd love to see something that assumes the commands run in the main /
default / unnamed network namespace / VRF unless otherwise specified.

I'd love to be able to add a parameter to a configuration file that
tells sshd to run in a specific VRF like Alarig was wanting to do.
Heck, I'd like to see init scripts gracefully deal with the fact that
there should be multiple instances of a daemon running, even if they are
simply on different ports, much less different VRFs or namespaces.

Re: VRFs / Jails / Containers

Michael Orlitzky
On 2/3/19 12:39 PM, Grant Taylor wrote:

> On 2/3/19 6:26 AM, Michael Orlitzky wrote:
>> You can add commands to your existing network configuration that will be
>> run when an interface comes up. For example, in /etc/conf.d/net,
>>
>>     ifup_wlan0="iwconfig \$int key s:secretkey enc open essid foobar"
>
> Ya....  I find that to be an absolute kludge.  Does it work?  Yes.  Is
> it clean?  Probably not.  Is it graceful?  Absolutely not.
>
> Think about how it's possible to configure bridging / bonding / VLANs
> via various parameters and having netifrc construct the commands that
> are run in the background.
>

Ultimately netifrc is just a shell script that parses another shell
script to construct a third shell script. I don't think doing it with
only two shell scripts is that much less elegant =)

You could go all the way and write your own OpenRC service as
/etc/init.d/whatever. You can make it depend on the network being up,
and then just write everything that you want it to do into the start
function with the corresponding "undo" steps in the stop function.

If the series of commands is long and complicated and if you sometimes
want to do/undo this subset of the configuration independently, then
that's how I'd do it.
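
A bare-bones sketch of such a service (script paths and names are
placeholders):

   #!/sbin/openrc-run
   # /etc/init.d/netns-routers -- hypothetical custom service

   depend() {
       need localmount
       use net
   }

   start() {
       ebegin "Creating router namespaces"
       /usr/local/sbin/netns-routers.sh start
       eend $?
   }

   stop() {
       ebegin "Tearing down router namespaces"
       /usr/local/sbin/netns-routers.sh stop
       eend $?
   }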

Re: VRFs / Jails / Containers

Laurence Perkins
In reply to this post by Grant Taylor-2


On Sat, 2019-02-02 at 19:32 -0700, Grant Taylor wrote:
> Does Gentoo have any support for VRFs or (chroot) Jails or Containers
> without going down the Docker (et al) path?
>
> I'm wanting to do some things with a Gentoo router that is trivial to
> do with network namespaces via manual commands ~> scripts.  But that's
> far from a standard Gentoo init script based system.  And I'd like
> something more Gentoo standards based.
>
> Does Gentoo have or support anything like this natively?  Or am I
> getting into territory where I'm rolling my own?
>

Have you tried firejail?  It gives you convenient ways to set up the
container parameters consistently and is in the repo.  Its invocation
is also simple enough to not clutter up your startup scripts.

LMP

Re: VRFs / Jails / Containers

Grant Taylor-2
On 02/04/2019 09:23 AM, Laurence Perkins wrote:
> Have you tried firejail?  It gives you convenient ways to set up the
> container parameters consistently and is in the repo.

No, I have not.  Thank you for the pointer.

> Its invocation is also simple enough to not clutter up your startup
> scripts.

I don't think I mind adding things to start up scripts.  I'm more
looking for the most Gentoo<ish> way to do what I'm wanting to do
without relying on something on top of Gentoo.  So if that involves
adding things to start up scripts, I'm cool with it.

I just don't want to add an entire subsystem, like Docker (et al), if I
don't actually have to.

I'm starting to wonder if I'm going to be better off writing new scripts
that will match existing init scripts and their methodology to
(re)start/stop namespaces / containers / jails.  Perhaps firejail will
give me what I want or provide insight.



--
Grant. . . .
unix || die

Re: VRFs / Jails / Containers

Rich Freeman
On Mon, Feb 4, 2019 at 1:44 PM Grant Taylor
<[hidden email]> wrote:
>
> I'm starting to wonder if I'm going to be better off writing new scripts
> that will match existing init scripts and their methodology to
> (re)start/stop namespaces / containers / jails.  Perhaps firejail will
> give me what I want or provide insight.
>

IMO I would separate your container logic from your service manager logic.

If you have a script that launches a container, then all you need is a
generic init.d script that runs it.

I launch nspawn containers from systemd units all the time.  The only
logic in the units is running the command line to start nspawn.

IMO if you start mixing the two it will just make it harder to
maintain.  Sure, an init.d script CAN do anything, but that doesn't
mean that you should do it this way.

Without creating a separate reply I wanted to react to your other
email detailing your config.  It strikes me that you might not even
need containers to set up all those interfaces and the routing between
them.  However, the container probably still makes sense so that
random processes trying to listen on 0.0.0.0 on the host don't end up
attaching to all those virtual interfaces.

Really all you need is some initialization inside each container and
then the kernel is doing all the work.  You don't really need any
userspace process running in the container except for the fact that
kernel namespaces are attached to processes.  As a result, I'd suggest
considering using sysvinit inside your containers to do the work.  You
might run openrc/netifrc to do the network setup inside each
container, or just have sysvinit run a shell script that initializes
and then terminates, leaving init running childless indefinitely (I
assume it supports this).  If you want a process to noop indefinitely
at minimal cost that is basically the definition of what sysvinit
does...
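
For example, the inittab inside the container could be as minimal as
something like this (the script path is a placeholder):

   # /etc/inittab inside the container
   id:3:initdefault:
   # run the one-shot network setup; init then just sits there
   ns:3:wait:/usr/local/sbin/container-net-setup.sh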

--
Rich

Re: VRFs / Jails / Containers

Grant Taylor-2
In reply to this post by Michael Orlitzky
On 02/03/2019 11:23 AM, Michael Orlitzky wrote:
> Ultimately netifrc is just a shell script that parses another shell
> script to construct a third shell script. I don't think doing it with
> only two shell scripts is that much less elegant =)

The elegance, or lack thereof, is not in the number of shell scripts.
Rather, the fact that tc (QoS) parameters are stuffed into a command line
versus having things split out and parsed is what I dislike.  Take VLANs
for example: there is a netifrc parameter for specifying the VLAN IDs
that belong on an interface.  Netifrc will then construct the commands.
People don't need to know how to construct the commands themselves to
utilize VLANs.  tc (QoS) is not anywhere nearly as nice.

Bridging and bonding is similarly more graceful than tc (QoS).

> You could go all the way and write your own OpenRC service as
> /etc/init.d/whatever.

That's sort of where I'm gravitating at the moment.  Something I can
(re)start/stop via standard init commands.

> You can make it depend on the network being up, and then just write
> everything that you want it to do into the start function with the
> corresponding "undo" steps in the stop function.

Maybe it will need to depend on the lowest level of networking.  Maybe.
Seeing as how it would provide networking between the host and the
namespaces (containers), I think it would functionally be parallel to
the networking services.  I think namespaces could be up even if the
main network was not.

> If the series of commands is long and complicated and if you sometimes
> want to do/undo this subset of the configuration independently, then
> that's how I'd do it.

The number of commands is really dependent on what I'm doing at a higher
level.  I can see having relatively similar commands for different
namespaces broken out into separate files such that it's easy to
(re)start/stop individual namespaces.  I might see if there's a way to
re-use the same file much like net.<device> is a sym-link to net.lo.
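
Something along these lines, perhaps (the netns.* service is a made-up
name and the dispatch-by-name idea is untested):

   # same trick netifrc uses for physical interfaces:
   #   ln -s /etc/init.d/net.lo /etc/init.d/net.eth0
   # a hypothetical namespace service could dispatch on its own name too:
   ln -s /etc/init.d/netns.lo /etc/init.d/netns.rtr1
   ln -s /etc/init.d/netns.lo /etc/init.d/netns.rtr2
   # inside the script, ${RC_SVCNAME#netns.} yields "rtr1", "rtr2", ...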



--
Grant. . . .
unix || die

Re: VRFs / Jails / Containers

Grant Taylor-2
In reply to this post by Rich Freeman
On 02/04/2019 11:55 AM, Rich Freeman wrote:
> IMO I would separate your container logic from your service manager logic.

I'm not exactly sure what you mean by "container logic" vs "service
manager logic" and how they differ.  I'm assuming that the former
creates / destroys the container and that the latter manages
(re)starting/stopping services where ever they are at.

> If you have a script that launches a container, then all you need is a
> generic init.d script that runs it.

I guess that's one way to do it.  But that doesn't seem very Gentoo<ish>
to me.

I'd like to see a way that I can have standard service init scripts and
use them where ever I want them, either inside a container or outside on
the host.

As long as I don't want to run the same service in multiple places, I
don't see a problem with doing that.  Multiple instances start to get
more tricky, but are still possible, and should be location agnostic.

> I launch nspawn containers from systemd units all the time.  The only
> logic in the units is running the command line to start nspawn.
>
> IMO if you start mixing the two it will just make it harder to maintain.
> Sure, an init.d script CAN do anything, but that doesn't mean that you
> should do it this way.

I'm wanting to avoid having an init script that creates the container
and starts services therein.  I'd rather start the container and then
start the services therein using the same type of init scripts, just
called within different contexts.

> Without creating a separate reply I wanted to react to your other email
> detailing your config.  It strikes me that you might not even need
> containers to set up all those interfaces and the routing between them.
> However, the container probably still makes sense so that random processes
> trying to listen on 0.0.0.0 on the host don't end up attaching to all
> those virtual interfaces.

Yes, I could have all the interfaces on the host.  But I'm doing a
number of different things and don't want to spoil the host.

The nine containers that I mentioned are long-standing containers.  I
routinely stand up 10 ~ 100 more for various tests.

I'm also using network namespaces as an isolation so that I can easily
do various things with networking without the added complexity of
isolating things from each other via command line or policy based
routing.  Each network namespace can easily have its own view of 0.0.0.0
(as a good example) and its own routing table.  I don't need to bother
with PBR / ip rules / iptables complexities.  Each NetNS just knows
about its local interfaces.

> Really all you need is some initialization inside each container and
> then the kernel is doing all the work.  You don't really need any
> userspace process running in the container except for the fact that
> kernel namespaces are attached to processes.

I mostly agree.  I am running BIRD inside the container, but that's more
of a would be nice to have and I can work around not having it.  There
are also the occasional commands that I want to run to do
troubleshooting (ping, traceroute, etc) as well as dynamically modifying
the containers which is usually done via "nsenter …" or "ip netns exec
$NetNSname …" commands.

> As a result, I'd suggest considering using sysvinit inside your
> containers to do the work.

That is a possibility.  But I feel like that's tantamount to saying
"Gentoo doesn't have an answer for what you're wanting to do, so just
use Sys V init scripts."  I don't like it.

I like the idea of re-using standard OpenRC / NetifRC scripts inside the
containers too.  Especially if the services don't conflict anywhere.  To
me, this re-uses the existing Gentoo methodology in different contexts.

> You might run openrc/netifrc to do the network setup inside each
> container, or just have sysvinit run a shell script that initializes
> and then terminates, leaving init running childless indefinitely (I
> assume it supports this).  If you want a process to noop indefinitely
> at minimal cost that is basically the definition of what sysvinit does...

The more that I think about it, largely in response to emails in this
thread, I believe that I want the same overall thing to create the
network between the default / main / unnamed NetNS and the container, as
well as likely re-using the OpenRC / NetifRC scripts to configure things
inside the container.

I think, and would be curious to have someone confirm or refute, that I
could add configuration information to /etc/conf.d/net for the xyz123
interface inside the container and use an /etc/init.d/net.xyz123 init
script sym-linked to /etc/init.d/net.lo script.

My host would not have net.xyz123 in any runlevel.  Certainly not boot
or default.

I think that would mean that I could run rc-service net.xyz123 start
inside the container and re-use existing Gentoo methodology.
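
Roughly, the pieces being described (untested; xyz123 and the address are
placeholders, and whether OpenRC's state tracking behaves correctly from
inside the namespace is exactly the open question):

   # /etc/conf.d/net
   config_xyz123="192.0.2.9/24"

   # same symlink convention as for physical interfaces
   ln -s /etc/init.d/net.lo /etc/init.d/net.xyz123

   # then, from inside the container / namespace
   rc-service net.xyz123 start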

Now I wonder if I could use custom runlevels for each container and rely
on standard init system.  }:-)  But that's a different question.



--
Grant. . . .
unix || die

Re: VRFs / Jails / Containers

Rich Freeman
So, I think we're miscommunicating a bit here...

On Mon, Feb 4, 2019 at 4:10 PM Grant Taylor
<[hidden email]> wrote:
>
> On 02/04/2019 11:55 AM, Rich Freeman wrote:
> > IMO I would separate your container logic from your service manager logic.
>
> I'm not exactly sure what you mean by "container logic" vs "service
> manager logic" and how they differ.  I'm assuming that the former
> creates / destroys the container and that the latter manages
> (re)starting/stopping services where ever they are at.

I'm saying that an init.d script shouldn't try to do anything other
than initialize a service, which should be implemented outside the
init.d script.

So, if you have a shell script that launches a container, then you
should call it from the init.d script.  You shouldn't merge them into
a single init.d script that has 30 lines of container setup logic or
whatever.

>
> I'd like to see a way that I can have standard service init scripts and
> use them where ever I want them, either inside a container or outside on
> the host.

Of course.  That shell script that launches a container could very
well just launch sysvinit which runs openrc which runs another set of
init.d scripts INSIDE the container to initialize it.

> I'm wanting to avoid having an init script that creates the container
> and starts services therein.  I'd rather start the container and then
> start the services therein using the same type of init scripts, just
> called within different contexts.

Yup - though I would think the scripts inside the container would be
fairly different, as they are doing different things.  The scripts
inside the container aren't starting containers, for a start...

> > As a result, I'd suggest considering using sysvinit inside your
> > containers to do the work.
>
> That is a possibility.  But I feel like that's tantamount to saying
> "Gentoo doesn't have an answer for what you're wanting to do, so just
> use Sys V init scripts."  I don't like it.
>
> I like the idea of re-using standard OpenRC / NetifRC scripts inside the
> containers too.  Especially if the services don't conflict anywhere.  To
> me, this re-uses the existing Gentoo methodology in different contexts.

OpenRC/Netifrc are run by sysvinit in Gentoo, as I mention later on.
These two are not mutually exclusive.

> The more that I think about it, largely in response to emails in this
> thread, I believe that I want the same overall thing to create the
> network between the default / main / unnamed NetNS and the container, as
> well as likely re-using the OpenRC / NetifRC scripts to configure things
> inside the container.

Not sure how much of it would be re-use.  The scripts inside/outside
the container would likely have different roles.

> I think, and would be curious to have someone confirm or refute, that I
> could add configuration information to /etc/conf.d/net for the xyz123
> interface inside the container and use an /etc/init.d/net.xyz123 init
> script sym-linked to /etc/init.d/net.lo script.
>
> My host would not have net.xyz123 in any runlevel.  Certainly not boot
> or default.

Honestly, I wouldn't go sticking container init.d scripts inside the
host init.d.  I mean, I guess you could, but again, separation of
concerns and all that.  You're going to have to use a separate
/etc/runlevels, so why not just a whole separate /etc?

--
Rich
