Merge remote-tracking branch 'upstream/master' into hardened-stdenv

This commit is contained in:
Robin Gloster 2016-04-18 13:00:40 +00:00
commit d020caa5b2
1369 changed files with 35810 additions and 10571 deletions

View file

@ -12,4 +12,21 @@ under the terms of [COPYING](../COPYING), which is an MIT-like license.
## Submitting changes
See the nixpkgs manual for details on how to [Submit changes to nixpkgs](http://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download-by-type/doc/manual#chap-submitting-changes).
* Format the commits in the following way:
`(pkg-name | service-name): (from -> to | init at version | refactor | etc)`
Examples:
* nginx: init at 2.0.1
* firefox: 3.0 -> 3.1.1
* hydra service: add bazBaz option
* nginx service: refactor config generation
* `meta.description` should:
* Be capitalized
* Not start with the package name
* Not have a dot at the end
See the nixpkgs manual for more details on how to [Submit changes to nixpkgs](http://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download-by-type/doc/manual#chap-submitting-changes).

View file

@ -1,4 +1,4 @@
###### Things done:
###### Things done
- [ ] Tested using sandboxing (`nix-build --option build-use-chroot true` or [nix.useChroot](http://nixos.org/nixos/manual/options.html#opt-nix.useChroot) on NixOS)
- Built on platform(s)
@ -9,13 +9,5 @@
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).
###### More
Fixes issue #<insert id>
cc @<maintainer>
---
_Please note that these points are not mandatory, but rather desired._

View file

@ -647,6 +647,30 @@ command, i.e. by running:
rm /nix/var/nix/manifests/*
rm /nix/var/nix/channel-cache/*
### How to use the Haste Haskell-to-JavaScript transpiler
Open a shell with `haste-compiler` and `haste-cabal-install` (you don't actually need
`node`, but it can be useful to test stuff):
$ nix-shell -p "haskellPackages.ghcWithPackages (self: with self; [haste-cabal-install haste-compiler])" -p nodejs
You may not need the following step, but if `haste-boot` fails to compile all the
packages it needs, this might do the trick:
$ haste-cabal update
`haste-boot` builds a set of core libraries so that they can be used from
transpiled JavaScript programs:
$ haste-boot
Transpile and run a "Hello world" program:
$ echo 'module Main where main = putStrLn "Hello world"' > hello-world.hs
$ hastec --onexec hello-world.hs
$ node hello-world.js
Hello world
### Builds on Darwin fail with `math.h` not found
Users of GHC on Darwin have occasionally reported that builds fail, because the

View file

@ -12,6 +12,7 @@
<xi:include href="introduction.xml" />
<xi:include href="quick-start.xml" />
<xi:include href="stdenv.xml" />
<xi:include href="multiple-output.xml" />
<xi:include href="configuration.xml" />
<xi:include href="functions.xml" />
<xi:include href="meta.xml" />

doc/multiple-output.xml (new file, 91 lines added)
View file

@ -0,0 +1,91 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY ndash "&#x2013;"> <!-- @vcunat likes to use this one ;-) -->
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="chap-multiple-output">
<title>Multiple-output packages</title>
<section><title>Introduction</title>
<para>The Nix language allows a derivation to produce multiple outputs, similar to what other Linux distribution packaging systems provide. The outputs reside in separate Nix store paths, so they can mostly be handled independently of each other, including being passed as build inputs, garbage collected or substituted as binaries. The exception is that building from source always produces all the outputs.</para>
<para>The main motivation is to save disk space by reducing runtime closure sizes; consequently, the sizes of substituted binaries shrink as well. Splitting allows more granular runtime dependencies; the typical reduction is to split away development-only files, as those are usually not needed at runtime. As a result, the closure sizes of many packages can be reduced to half or even much less.</para>
<note><para>The same reduction could instead be achieved by building the parts in completely separate derivations. That would often additionally reduce build-time closures, but such derivations tend to be much harder to write, as build systems typically assume all parts are built at once. This compromise approach of a single source package producing multiple binary packages is also commonly used by rpm and deb.</para></note>
</section>
<section><title>Installing a split package</title>
<para>When installing a package via <varname>systemPackages</varname> or <command>nix-env</command> you have several options:</para>
<warning><para>Currently <command>nix-env</command> almost always installs all outputs until https://github.com/NixOS/nix/pull/815 gets merged.</para></warning>
<itemizedlist>
<listitem><para>You can install particular outputs explicitly, as each is available in the Nix language as an attribute of the package. The <varname>outputs</varname> attribute contains a list of output names.</para></listitem>
<listitem><para>You can let it use the default outputs. These are handled by <varname>meta.outputsToInstall</varname> attribute that contains a list of output names.</para>
<para>TODO: more about tweaking the attribute, etc.</para></listitem>
<listitem><para>NixOS provides the configuration option <varname>environment.extraOutputsToInstall</varname>, which allows adding extra outputs of <varname>environment.systemPackages</varname> on top of the default ones (see the sketch after this list). It's mainly meant for documentation and debug symbols, and it's also modified by specific options.</para>
<note><para>At this moment there is no similar configurability for packages installed by <command>nix-env</command>. You can still use the approach from <xref linkend="sec-modify-via-packageOverrides" /> to override the <varname>meta.outputsToInstall</varname> attributes, but that's rather inconvenient.</para></note>
</listitem>
</itemizedlist>
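<para>A minimal sketch of such a configuration (the chosen package and outputs are only illustrative):</para>
<programlisting>
environment.systemPackages = [ pkgs.coreutils ];
# additionally link these outputs system-wide, on top of the defaults:
environment.extraOutputsToInstall = [ "doc" "info" ];
</programlisting>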
</section>
<section><title>Using a split package</title>
<para>In the Nix language the individual outputs can be reached explicitly as attributes, e.g. <varname>coreutils.info</varname>, but the typical case is just using packages as build inputs.</para>
<para>When a multiple-output derivation becomes a build input of another derivation, its first output is added (<varname>.dev</varname> by convention), and so are the <varname>propagatedBuildOutputs</varname> of that package, which by default contain <varname>$outputBin</varname> and <varname>$outputLib</varname>. (See <xref linkend="multiple-output-file-type-groups" />.)</para>
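<para>For example (a sketch; the package names are illustrative):</para>
<programlisting>
buildInputs = [ openssl zlib.dev ];
# openssl resolves to its first (.dev) output automatically; zlib.dev names an output explicitly.
</programlisting>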
</section>
<section><title>Writing a split derivation</title>
<para>Here is how to write a derivation that produces multiple outputs.</para>
<para>In nixpkgs there is a framework supporting multiple-output derivations. It tries to cover most cases by its default behavior. You can find the source separated in &lt;<filename>nixpkgs/pkgs/build-support/setup-hooks/multiple-outputs.sh</filename>&gt;; it's relatively readable. The whole machinery is triggered by defining the <varname>outputs</varname> attribute to contain the list of desired output names (strings).</para>
<programlisting>outputs = [ "dev" "out" "bin" "doc" ];</programlisting>
<para>Often such a single line is enough. For each output an equally named environment variable is passed to the builder and contains the Nix store path for that output. By convention, the first output should usually be <varname>dev</varname>; typically you also want the main <varname>out</varname> output, as it catches any files that didn't end up elsewhere.</para>
<note><para>There is a special handling of the <varname>debug</varname> output, described at <xref linkend="stdenv-separateDebugInfo" />.</para></note>
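<para>A minimal sketch of a derivation using this attribute (the package name and source are assumptions):</para>
<programlisting>
stdenv.mkDerivation {
  name = "libfoo-1.0";                     # hypothetical package
  src = ./libfoo-1.0.tar.gz;               # assumed local source tarball
  outputs = [ "dev" "out" "bin" "doc" ];
  # Each output name becomes a variable ($dev, $out, ...) holding its store path.
}
</programlisting>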
<section xml:id="multiple-output-file-type-groups">
<title>File type groups</title>
<para>The support code currently recognizes some particular kinds of outputs and either instructs the build system of the package to put files into their desired outputs or it moves the files during the fixup phase. Each group of file types has an <varname>outputFoo</varname> variable specifying the output name where they should go. If that variable isn't defined by the derivation writer, it is guessed &ndash; a default output name is chosen, falling back to other possibilities if that output isn't defined. A small override sketch follows the list below.</para>
<variablelist>
<varlistentry><term><varname>
$outputDev</varname></term><listitem><para>
is for development-only files. These include C(++) headers, pkg-config, cmake and aclocal files. They go to <varname>dev</varname> or <varname>out</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputBin</varname></term><listitem><para>
is meant for user-facing binaries, typically residing in bin/. They go to <varname>bin</varname> or <varname>out</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputLib</varname></term><listitem><para>
is meant for libraries, typically residing in <filename>lib/</filename> and <filename>libexec/</filename>. They go to <varname>lib</varname> or <varname>out</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputDoc</varname></term><listitem><para>
is for user documentation, typically residing in <filename>share/doc/</filename>. It goes to <varname>doc</varname> or <varname>out</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputDocdev</varname></term><listitem><para>
is for <emphasis>developer</emphasis> documentation. Currently we count gtk-doc and man3 pages in there. It goes to <varname>docdev</varname> or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputMan</varname></term><listitem><para>
is for man pages (except for section 3). They go to <varname>man</varname> or <varname>doc</varname> or <varname>$outputBin</varname> by default.
</para></listitem></varlistentry>
<varlistentry><term><varname>
$outputInfo</varname></term><listitem><para>
is for info pages. They go to <varname>info</varname> or <varname>doc</varname> or <varname>$outputMan</varname> by default.
</para></listitem></varlistentry>
</variablelist>
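<para>As a sketch, a derivation can set these variables explicitly, for example to keep developer documentation instead of having it removed (the chosen output names are illustrative):</para>
<programlisting>
outputs = [ "out" "dev" "doc" ];
outputDocdev = "doc";   # keep gtk-doc/man3 pages instead of dropping them
outputMan = "doc";      # put man pages into the doc output as well
</programlisting>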
</section>
<section><title>Common caveats</title>
<itemizedlist>
<listitem><para>Some configure scripts don't like some of the parameters passed by default by the framework, e.g. <literal>--docdir=/foo/bar</literal>. You can disable this by setting <literal>setOutputFlags = false;</literal>.</para></listitem>
<listitem><para>The outputs of a single derivation can retain references to each other, but note that circular references are not allowed. (And each strongly-connected component would act as a single output anyway.)</para></listitem>
<listitem><para>Most split packages contain their core functionality in libraries. These libraries tend to refer to various kinds of data that typically get into <varname>out</varname>, e.g. locale strings, so there is often no advantage in separating the libraries into <varname>lib</varname>, as keeping them in <varname>out</varname> is easier.</para></listitem>
<listitem><para>Some packages have hidden assumptions on install paths, which complicates splitting.</para></listitem>
</itemizedlist>
</section>
</section><!--Writing a split derivation-->
</chapter>

View file

@ -956,7 +956,7 @@ following:
phase.</para></listitem>
</varlistentry>
<varlistentry>
<varlistentry xml:id="stdenv-separateDebugInfo">
<term><varname>separateDebugInfo</varname></term>
<listitem><para>If set to <literal>true</literal>, the standard
environment will enable debug information in C/C++ builds. After

View file

@ -438,6 +438,24 @@ rec {
overrideExisting = old: new:
old // listToAttrs (map (attr: nameValuePair attr (attrByPath [attr] old.${attr} new)) (attrNames old));
/* Try the given attributes in order. If none of them is found, return
the attribute set itself.
Example:
tryAttrs ["a" "b"] { a = 1; b = 2; }
=> 1
tryAttrs ["a" "b"] { c = 3; }
=> { c = 3; }
*/
tryAttrs = allAttrs: set:
let tryAttrs_ = attrs:
if attrs == [] then set
else
(let h = head attrs; in
if hasAttr h set then getAttr h set
else tryAttrs_ (tail attrs));
in tryAttrs_ allAttrs;
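# Usage sketch: tryAttrs [ "bin" "out" ] pkgs.openssl returns the bin output if
# present, otherwise out, otherwise the package itself (this is how
# makeSearchPathOutputs in lib/strings.nix uses it).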
/*** deprecated stuff ***/

View file

@ -129,7 +129,7 @@ rec {
};
outputsList = map outputToAttrListElement outputs;
in commonAttrs.${drv.outputName};
in commonAttrs // { outputUnspecified = true; };
/* Strip a derivation of all non-essential attributes, returning

View file

@ -81,6 +81,8 @@
copumpkin = "Dan Peebles <pumpkingod@gmail.com>";
coroa = "Jonas Hörsch <jonas@chaoflow.net>";
couchemar = "Andrey Pavlov <couchemar@yandex.ru>";
cransom = "Casey Ransom <cransom@hubns.net>";
CrystalGamma = "Jona Stubbe <nixos@crystalgamma.de>";
cstrahan = "Charles Strahan <charles.c.strahan@gmail.com>";
cwoac = "Oliver Matthews <oliver@codersoffortune.net>";
DamienCassou = "Damien Cassou <damien@cassou.me>";
@ -117,6 +119,7 @@
ertes = "Ertugrul Söylemez <ertesx@gmx.de>";
exi = "Reno Reckling <nixos@reckling.org>";
exlevan = "Alexey Levan <exlevan@gmail.com>";
expipiplus1 = "Joe Hermaszewski <nix@monoid.al>";
fadenb = "Tristan Helmich <tristan.helmich+nixos@gmail.com>";
falsifian = "James Cook <james.cook@utoronto.ca>";
flosse = "Markus Kohlhase <mail@markus-kohlhase.de>";
@ -137,7 +140,6 @@
garrison = "Jim Garrison <jim@garrison.cc>";
gavin = "Gavin Rogers <gavin@praxeology.co.uk>";
gebner = "Gabriel Ebner <gebner@gebner.org>";
gfxmonk = "Tim Cuthbertson <tim@gfxmonk.net>";
giogadi = "Luis G. Torres <lgtorres42@gmail.com>";
gleber = "Gleb Peregud <gleber.p@gmail.com>";
globin = "Robin Gloster <mail@glob.in>";
@ -148,7 +150,7 @@
havvy = "Ryan Scheel <ryan.havvy@gmail.com>";
hbunke = "Hendrik Bunke <bunke.hendrik@gmail.com>";
henrytill = "Henry Till <henrytill@gmail.com>";
hiberno = "Christian Lask <mail@elfsechsundzwanzig.de>";
hiberno = "Christian Lask <hiberno@hiberno.net>";
hinton = "Tom Hinton <t@larkery.com>";
hrdinka = "Christoph Hrdinka <c.nix@hrdinka.at>";
iand675 = "Ian Duncan <ian@iankduncan.com>";
@ -231,7 +233,9 @@
mirdhyn = "Merlin Gaillard <mirdhyn@gmail.com>";
modulistic = "Pablo Costa <modulistic@gmail.com>";
mog = "Matthew O'Gorman <mog-lists@rldn.net>";
moretea = "Maarten Hoogendoorn <maarten@moretea.nl>";
mornfall = "Petr Ročkai <me@mornfall.net>";
MostAwesomeDude = "Corbin Simpson <cds@corbinsimpson.com>";
MP2E = "Cray Elliott <MP2E@archlinux.us>";
msackman = "Matthew Sackman <matthew@wellquite.org>";
mschristiansen = "Mikkel Christiansen <mikkel@rheosystems.com>";
@ -239,6 +243,7 @@
mtreskin = "Max Treskin <zerthurd@gmail.com>";
mudri = "James Wood <lamudri@gmail.com>";
muflax = "Stefan Dorn <mail@muflax.com>";
myrl = "Myrl Hex <myrl.0xf@gmail.com>";
nathan-gs = "Nathan Bijnens <nathan@nathan.gs>";
nckx = "Tobias Geerinckx-Rice <tobias.geerinckx.rice@gmail.com>";
nequissimus = "Tim Steinbach <tim@nequissimus.com>";
@ -343,6 +348,7 @@
the-kenny = "Moritz Ulrich <moritz@tarn-vedra.de>";
theuni = "Christian Theune <ct@flyingcircus.io>";
thoughtpolice = "Austin Seipp <aseipp@pobox.com>";
timbertson = "Tim Cuthbertson <tim@gfxmonk.net>";
titanous = "Jonathan Rudenberg <jonathan@titanous.com>";
tohl = "Tomas Hlavaty <tom@logand.com>";
tokudan = "Daniel Frank <git@danielfrank.net>";
@ -366,6 +372,7 @@
vlstill = "Vladimír Štill <xstill@fi.muni.cz>";
vmandela = "Venkateswara Rao Mandela <venkat.mandela@gmail.com>";
vozz = "Oliver Hunt <oliver.huntuk@gmail.com>";
vrthra = "Rahul Gopinath <rahul@gopinath.org>";
wedens = "wedens <kirill.wedens@gmail.com>";
willtim = "Tim Philip Williams <tim.williams.public@gmail.com>";
winden = "Antonio Vargas Gonzalez <windenntw@gmail.com>";

View file

@ -88,6 +88,16 @@ rec {
makeSearchPath = subDir: packages:
concatStringsSep ":" (map (path: path + "/" + subDir) packages);
/* Construct a Unix-style search path, trying the given outputs of each package in order.
If none of them is present, fall back to `.out` and then to the package itself.
Example:
makeSearchPathOutputs "bin" ["bin"] [ pkgs.openssl pkgs.zlib ]
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r-bin/bin:/nix/store/wwh7mhwh269sfjkm6k5665b5kgp7jrk2-zlib-1.2.8/bin"
*/
makeSearchPathOutputs = subDir: outputs: pkgs:
makeSearchPath subDir (map (pkg: if pkg.outputUnspecified or false then lib.tryAttrs (outputs ++ ["out"]) pkg else pkg) pkgs);
/* Construct a library search path (such as RPATH) containing the
libraries for a set of packages
@ -98,7 +108,9 @@ rec {
makeLibraryPath [ pkgs.openssl pkgs.zlib ]
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r/lib:/nix/store/wwh7mhwh269sfjkm6k5665b5kgp7jrk2-zlib-1.2.8/lib"
*/
makeLibraryPath = makeSearchPath "lib";
makeLibraryPath = pkgs: makeSearchPath "lib"
# try to guess the right output of each pkg
(map (pkg: if pkg.outputUnspecified or false then pkg.lib or (pkg.out or pkg) else pkg) pkgs);
/* Construct a binary search path (such as $PATH) containing the
binaries for a set of packages.
@ -107,7 +119,8 @@ rec {
makeBinPath ["/root" "/usr" "/usr/local"]
=> "/root/bin:/usr/bin:/usr/local/bin"
*/
makeBinPath = makeSearchPath "bin";
makeBinPath = pkgs: makeSearchPath "bin"
(map (pkg: if pkg.outputUnspecified or false then pkg.bin or (pkg.out or pkg) else pkg) pkgs);
/* Construct a perl search path (such as $PERL5LIB)
@ -119,7 +132,8 @@ rec {
makePerlPath [ pkgs.perlPackages.NetSMTP ]
=> "/nix/store/n0m1fk9c960d8wlrs62sncnadygqqc6y-perl-Net-SMTP-1.25/lib/perl5/site_perl"
*/
makePerlPath = makeSearchPath "lib/perl5/site_perl";
makePerlPath = pkgs: makeSearchPath "lib/perl5/site_perl"
(map (pkg: if pkg.outputUnspecified or false then pkg.lib or (pkg.out or pkg) else pkg) pkgs);
/* Depending on the boolean `cond', return either the given string
or the empty string. Useful to concatenate against a bigger string.

View file

@ -14,12 +14,12 @@ let
operator = const [ ];
});
urls = map (drv: { url = head drv.urls; hash = drv.outputHash; type = drv.outputHashAlgo; }) fetchurlDependencies;
urls = map (drv: { url = head (drv.urls or [ drv.url ]); hash = drv.outputHash; type = drv.outputHashAlgo; }) fetchurlDependencies;
fetchurlDependencies =
filter
(drv: drv.outputHash or "" != "" && drv.outputHashMode or "flat" == "flat"
&& drv.postFetch or "" == "" && drv ? urls)
&& drv.postFetch or "" == "" && (drv ? url || drv ? urls))
dependencies;
dependencies = map (x: x.value) (genericClosure {

View file

@ -27,7 +27,9 @@ effect after you run <command>nixos-rebuild</command>.</para>
<!-- FIXME: auto-include NixOS module docs -->
<xi:include href="postgresql.xml" />
<xi:include href="gitlab.xml" />
<xi:include href="taskserver.xml" />
<xi:include href="acme.xml" />
<xi:include href="input-methods.xml" />
<!-- Apache; libvirtd virtualisation -->

View file

@ -44,7 +44,7 @@ let
echo "for hints about the offending path)."
exit 1
fi
${libxslt}/bin/xsltproc \
${libxslt.bin}/bin/xsltproc \
--stringparam revision '${revision}' \
-o $out ${./options-to-docbook.xsl} $optionsXML
'';
@ -57,7 +57,9 @@ let
chmod -R u+w .
cp ${../../modules/services/databases/postgresql.xml} configuration/postgresql.xml
cp ${../../modules/services/misc/gitlab.xml} configuration/gitlab.xml
cp ${../../modules/services/misc/taskserver/doc.xml} configuration/taskserver.xml
cp ${../../modules/security/acme.xml} configuration/acme.xml
cp ${../../modules/i18n/input-method/default.xml} configuration/input-methods.xml
ln -s ${optionsDocBook} options-db.xml
echo "${version}" > version
'';

View file

@ -157,10 +157,6 @@ $ nano /mnt/etc/nixos/configuration.nix
<command>nixos-generate-config</command> will figure out the
required modules.</para></note>
<para>Examples of real-world NixOS configuration files can be
found at <link
xlink:href="https://nixos.org/repos/nix/configurations/trunk/"/>.</para>
</listitem>
<listitem><para>Do the installation:

View file

@ -63,11 +63,11 @@ has the following highlights:</para>
<itemizedlist>
<listitem><para><literal>services/monitoring/longview.nix</literal></para></listitem>
<listitem><para><literal>hardware/video/webcam/facetimehd.nix</literal></para></listitem>
<listitem><para><literal>i18n/inputMethod/default.nix</literal></para></listitem>
<listitem><para><literal>i18n/inputMethod/fcitx.nix</literal></para></listitem>
<listitem><para><literal>i18n/inputMethod/ibus.nix</literal></para></listitem>
<listitem><para><literal>i18n/inputMethod/nabi.nix</literal></para></listitem>
<listitem><para><literal>i18n/inputMethod/uim.nix</literal></para></listitem>
<listitem><para><literal>i18n/input-method/default.nix</literal></para></listitem>
<listitem><para><literal>i18n/input-method/fcitx.nix</literal></para></listitem>
<listitem><para><literal>i18n/input-method/ibus.nix</literal></para></listitem>
<listitem><para><literal>i18n/input-method/nabi.nix</literal></para></listitem>
<listitem><para><literal>i18n/input-method/uim.nix</literal></para></listitem>
<listitem><para><literal>programs/fish.nix</literal></para></listitem>
<listitem><para><literal>security/acme.nix</literal></para></listitem>
<listitem><para><literal>security/audit.nix</literal></para></listitem>

View file

@ -543,7 +543,7 @@ sub waitForX {
retry sub {
my ($status, $out) = $self->execute("journalctl -b SYSLOG_IDENTIFIER=systemd | grep 'session opened'");
return 0 if $status != 0;
($status, $out) = $self->execute("xwininfo -root > /dev/null 2>&1");
($status, $out) = $self->execute("[ -e /tmp/.X11-unix/X0 ]");
return 1 if $status == 0;
}
});

View file

@ -38,7 +38,7 @@ with lib;
# environment.pathsToLink, and we can't have both.
#environment.pathsToLink = [ "/lib/debug/.build-id" ];
environment.outputsToLink =
environment.extraOutputsToInstall =
optional config.environment.enableDebugInfo "debug";
};

View file

@ -236,7 +236,7 @@ with lib;
# Versioned fontconfig > 2.10. Take shared fonts.conf from fontconfig.
# Otherwise specify only font directories.
environment.etc."fonts/${pkgs.fontconfig.configVersion}/fonts.conf".source =
"${pkgs.fontconfig}/etc/fonts/fonts.conf";
"${pkgs.fontconfig.out}/etc/fonts/fonts.conf";
environment.etc."fonts/${pkgs.fontconfig.configVersion}/conf.d/00-nixos.conf".text =
let

View file

@ -148,7 +148,7 @@ in
"protocols".source = pkgs.iana_etc + "/etc/protocols";
# /etc/rpc: RPC program numbers.
"rpc".source = pkgs.glibc + "/etc/rpc";
"rpc".source = pkgs.glibc.out + "/etc/rpc";
# /etc/hosts: Hostname-to-IP mappings.
"hosts".text =

View file

@ -26,7 +26,7 @@ let
# are built with PulseAudio support (like KDE).
clientConf = writeText "client.conf" ''
autospawn=${if nonSystemWide then "yes" else "no"}
${optionalString nonSystemWide "daemon-binary=${cfg.package}/bin/pulseaudio"}
${optionalString nonSystemWide "daemon-binary=${cfg.package.out}/bin/pulseaudio"}
'';
# Write an /etc/asound.conf that causes all ALSA applications to
@ -130,11 +130,11 @@ in {
source = clientConf;
};
hardware.pulseaudio.configFile = mkDefault "${cfg.package}/etc/pulse/default.pa";
hardware.pulseaudio.configFile = mkDefault "${cfg.package.out}/etc/pulse/default.pa";
}
(mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
environment.systemPackages = [ cfg.package.out ];
environment.etc = singleton {
target = "asound.conf";
@ -195,7 +195,7 @@ in {
environment.PULSE_RUNTIME_PATH = stateDir;
serviceConfig = {
Type = "notify";
ExecStart = "${cfg.package}/bin/pulseaudio --daemonize=no --log-level=${cfg.daemon.logLevel} --system -n --file=${cfg.configFile}";
ExecStart = "${cfg.package.out}/bin/pulseaudio --daemonize=no --log-level=${cfg.daemon.logLevel} --system -n --file=${cfg.configFile}";
Restart = "on-failure";
};
};

View file

@ -73,11 +73,11 @@ in
description = "List of directories to be symlinked in <filename>/run/current-system/sw</filename>.";
};
outputsToLink = mkOption {
extraOutputsToInstall = mkOption {
type = types.listOf types.str;
default = [];
example = [ "doc" ];
description = "List of package outputs to be symlinked into <filename>/run/current-system/sw</filename>.";
default = [ ];
example = [ "doc" "info" "docdev" ];
description = "List of additional package outputs to be symlinked into <filename>/run/current-system/sw</filename>.";
};
};
@ -123,9 +123,10 @@ in
system.path = pkgs.buildEnv {
name = "system-path";
paths = config.environment.systemPackages;
inherit (config.environment) pathsToLink outputsToLink;
inherit (config.environment) pathsToLink extraOutputsToInstall;
ignoreCollisions = true;
# !!! Hacky, should modularise.
# outputs TODO: note that the tools will often not be linked by default
postBuild =
''
if [ -x $out/bin/update-mime-database -a -w $out/share/mime ]; then

View file

@ -0,0 +1,131 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-services-input-methods">
<title>Input Methods</title>
<para>Input methods are an operating system component that allows any data, such
as keyboard strokes or mouse movements, to be received as input. In this way
users can enter characters and symbols not found on their input devices. Using
an input method is obligatory for any language that has more graphemes than
there are keys on the keyboard.</para>
<para>The following input methods are available in NixOS:</para>
<itemizedlist>
<listitem><para>IBus: The intelligent input bus.</para></listitem>
<listitem><para>Fcitx: A customizable lightweight input
method.</para></listitem>
<listitem><para>Nabi: A Korean input method based on XIM.</para></listitem>
<listitem><para>Uim: The universal input method, a library with an XIM
bridge.</para></listitem>
</itemizedlist>
<section><title>IBus</title>
<para>IBus is an Intelligent Input Bus. It provides a full-featured and
user-friendly input method user interface.</para>
<para>The following snippet can be used to configure IBus:</para>
<programlisting>
i18n.inputMethod = {
enabled = "ibus";
ibus.engines = with pkgs.ibus-engines; [ anthy hangul mozc ];
};
</programlisting>
<para><literal>i18n.inputMethod.ibus.engines</literal> is optional and can be
used to add extra IBus engines.</para>
<para>Available extra IBus engines are:</para>
<itemizedlist>
<listitem><para>Anthy (<literal>ibus-engines.anthy</literal>): Anthy is a
Japanese input method system. It converts Hiragana text to mixed Kana-Kanji
text.</para></listitem>
<listitem><para>Hangul (<literal>ibus-engines.hangul</literal>): Korean input
method.</para></listitem>
<listitem><para>m17n (<literal>ibus-engines.m17n</literal>): m17n is an input
method that uses input methods and corresponding icons in the m17n
database.</para></listitem>
<listitem><para>mozc (<literal>ibus-engines.mozc</literal>): A Japanese input
method from Google.</para></listitem>
<listitem><para>Table (<literal>ibus-engines.table</literal>): An input method
that loads tables of input methods.</para></listitem>
<listitem><para>table-others (<literal>ibus-engines.table-others</literal>):
Various table-based input methods.</para></listitem>
</itemizedlist>
</section>
<section><title>Fcitx</title>
<para>Fcitx is an input method framework with extension support. It has three
built-in input method engines: Pinyin, QuWei and Table-based input
methods.</para>
<para>The following snippet can be used to configure Fcitx:</para>
<programlisting>
i18n.inputMethod = {
enabled = "fcitx";
fcitx.engines = with pkgs.fcitx-engines; [ mozc hangul m17n ];
};
</programlisting>
<para><literal>i18n.inputMethod.fcitx.engines</literal> is optional and can be
used to add extra Fcitx engines.</para>
<para>Available extra Fcitx engines are:</para>
<itemizedlist>
<listitem><para>Anthy (<literal>fcitx-engines.anthy</literal>): Anthy is a
Japanese input method system. It converts Hiragana text to mixed Kana-Kanji
text.</para></listitem>
<listitem><para>Chewing (<literal>fcitx-engines.chewing</literal>): Chewing is
an intelligent Zhuyin input method. It is one of the most popular input
methods among Traditional Chinese Unix users.</para></listitem>
<listitem><para>Hangul (<literal>fcitx-engines.hangul</literal>): Korean input
method.</para></listitem>
<listitem><para>m17n (<literal>fcitx-engines.m17n</literal>): m17n is an input
method that uses input methods and corresponding icons in the m17n
database.</para></listitem>
<listitem><para>mozc (<literal>fcitx-engines.mozc</literal>): A Japanese input
method from Google.</para></listitem>
<listitem><para>table-others (<literal>fcitx-engines.table-others</literal>):
Various table-based input methods.</para></listitem>
</itemizedlist>
</section>
<section><title>Nabi</title>
<para>Nabi is an easy to use Korean X input method. It allows you to enter
phonetic Korean characters (hangul) and pictographic Korean characters
(hanja).</para>
<para>The following snippet can be used to configure Nabi:</para>
<programlisting>
i18n.inputMethod = {
enabled = "nabi";
};
</programlisting>
</section>
<section><title>Uim</title>
<para>Uim (short for "universal input method") is a multilingual input method
framework. Applications can use it through so-called bridges.</para>
<para>The following snippet can be used to configure uim:</para>
<programlisting>
i18n.inputMethod = {
enabled = "uim";
};
</programlisting>
<para>Note: The <literal>i18n.inputMethod.uim.toolbar</literal> option can be
used to choose the uim toolbar.</para>
</section>
</chapter>

View file

@ -78,7 +78,7 @@ let cfg = config.system.autoUpgrade; in
HOME = "/root";
};
path = [ pkgs.gnutar pkgs.xz config.nix.package ];
path = [ pkgs.gnutar pkgs.xz.bin config.nix.package ];
script = ''
${config.system.build.nixos-rebuild}/bin/nixos-rebuild switch ${toString cfg.flags}

View file

@ -474,7 +474,7 @@ my $hwConfig = <<EOF;
boot.kernelModules = [$kernelModules ];
boot.extraModulePackages = [$modulePackages ];
$fsAndSwap
nix.maxJobs = $cpus;
nix.maxJobs = lib.mkDefault $cpus;
${\join "", (map { " $_\n" } (uniq @attrs))}}
EOF

View file

@ -47,6 +47,7 @@
#floppy = 18; # unused
#uucp = 19; # unused
#lp = 20; # unused
#proc = 21; # unused
pulseaudio = 22; # must match `pulseaudio' GID
gpsd = 23;
#cdrom = 24; # unused
@ -259,6 +260,9 @@
hydra-www = 236;
syncthing = 237;
mfi = 238;
caddy = 239;
taskd = 240;
factorio = 241;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -288,6 +292,7 @@
floppy = 18;
uucp = 19;
lp = 20;
proc = 21;
pulseaudio = 22; # must match `pulseaudio' UID
gpsd = 23;
cdrom = 24;
@ -489,6 +494,9 @@
radicale = 234;
syncthing = 237;
#mfi = 238; # unused
caddy = 239;
taskd = 240;
factorio = 241;
# When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal

View file

@ -88,7 +88,7 @@ in {
serviceConfig.PrivateNetwork = "yes";
serviceConfig.NoNewPrivileges = "yes";
serviceConfig.ReadOnlyDirectories = "/";
serviceConfig.ReadWriteDirectories = cfg.output;
serviceConfig.ReadWriteDirectories = dirOf cfg.output;
};
systemd.timers.update-locatedb = mkIf cfg.enable

View file

@ -41,11 +41,11 @@
./hardware/video/nvidia.nix
./hardware/video/ati.nix
./hardware/video/webcam/facetimehd.nix
./i18n/inputMethod/default.nix
./i18n/inputMethod/fcitx.nix
./i18n/inputMethod/ibus.nix
./i18n/inputMethod/nabi.nix
./i18n/inputMethod/uim.nix
./i18n/input-method/default.nix
./i18n/input-method/fcitx.nix
./i18n/input-method/ibus.nix
./i18n/input-method/nabi.nix
./i18n/input-method/uim.nix
./installer/tools/auto-upgrade.nix
./installer/tools/nixos-checkout.nix
./installer/tools/tools.nix
@ -90,6 +90,7 @@
./security/ca.nix
./security/duosec.nix
./security/grsecurity.nix
./security/hidepid.nix
./security/oath.nix
./security/pam.nix
./security/pam_usb.nix
@ -157,6 +158,7 @@
./services/desktops/gnome3/tracker.nix
./services/desktops/profile-sync-daemon.nix
./services/desktops/telepathy.nix
./services/games/factorio.nix
./services/games/ghost-one.nix
./services/games/minecraft-server.nix
./services/games/minetest-server.nix
@ -249,6 +251,7 @@
./services/misc/sundtek.nix
./services/misc/svnserve.nix
./services/misc/synergy.nix
./services/misc/taskserver
./services/misc/uhub.nix
./services/misc/zookeeper.nix
./services/monitoring/apcupsd.nix
@ -328,7 +331,7 @@
./services/networking/hostapd.nix
./services/networking/i2pd.nix
./services/networking/i2p.nix
./services/networking/iodined.nix
./services/networking/iodine.nix
./services/networking/ircd-hybrid/default.nix
./services/networking/kippo.nix
./services/networking/lambdabot.nix
@ -425,6 +428,7 @@
./services/system/nscd.nix
./services/system/uptimed.nix
./services/torrent/deluge.nix
./services/torrent/flexget.nix
./services/torrent/peerflix.nix
./services/torrent/transmission.nix
./services/ttys/agetty.nix
@ -432,6 +436,7 @@
./services/ttys/kmscon.nix
./services/web-apps/pump.io.nix
./services/web-servers/apache-httpd/default.nix
./services/web-servers/caddy.nix
./services/web-servers/fcgiwrap.nix
./services/web-servers/jboss/default.nix
./services/web-servers/lighttpd/cgit.nix

View file

@ -35,7 +35,7 @@
# Tools to create / manipulate filesystems.
pkgs.ntfsprogs # for resizing NTFS partitions
pkgs.dosfstools
pkgs.xfsprogs
pkgs.xfsprogs.bin
pkgs.jfsutils
pkgs.f2fs-tools

View file

@ -56,7 +56,7 @@ in
*/
shellAliases = mkOption {
default = config.environment.shellAliases // { which = "type -P"; };
default = config.environment.shellAliases;
description = ''
Set of aliases for bash shell. See <option>environment.shellAliases</option>
for an option format description.

View file

@ -101,6 +101,9 @@ in
end
'';
# include programs that bring their own completions
environment.pathsToLink = [ "/share/fish/vendor_completions.d" ];
environment.systemPackages = [ pkgs.fish ];
environment.shells = [

View file

@ -23,7 +23,7 @@ with lib;
environment.pathsToLink = [ "/share/man" ];
environment.outputsToLink = [ "man" ];
environment.extraOutputsToInstall = [ "man" ];
};

View file

@ -89,8 +89,8 @@ in
nameValuePair "xfs_quota-${name}" {
description = "Setup xfs_quota for project ${name}";
script = ''
${pkgs.xfsprogs}/bin/xfs_quota -x -c 'project -s ${name}' ${opts.fileSystem}
${pkgs.xfsprogs}/bin/xfs_quota -x -c 'limit -p ${limitOptions opts} ${name}' ${opts.fileSystem}
${pkgs.xfsprogs.bin}/bin/xfs_quota -x -c 'project -s ${name}' ${opts.fileSystem}
${pkgs.xfsprogs.bin}/bin/xfs_quota -x -c 'limit -p ${limitOptions opts} ${name}' ${opts.fileSystem}
'';
wantedBy = [ "multi-user.target" ];

View file

@ -101,6 +101,13 @@ with lib;
# Enlightenment
(mkRenamedOptionModule [ "services" "xserver" "desktopManager" "e19" "enable" ] [ "services" "xserver" "desktopManager" "enlightenment" "enable" ])
# Iodine
(mkRenamedOptionModule [ "services" "iodined" "enable" ] [ "services" "iodine" "server" "enable" ])
(mkRenamedOptionModule [ "services" "iodined" "domain" ] [ "services" "iodine" "server" "domain" ])
(mkRenamedOptionModule [ "services" "iodined" "ip" ] [ "services" "iodine" "server" "ip" ])
(mkRenamedOptionModule [ "services" "iodined" "extraConfig" ] [ "services" "iodine" "server" "extraConfig" ])
(mkRemovedOptionModule [ "services" "iodined" "client" ])
# Options that are obsolete and have no replacement.
(mkRemovedOptionModule [ "boot" "initrd" "luks" "enable" ])
(mkRemovedOptionModule [ "programs" "bash" "enable" ])

View file

@ -152,7 +152,7 @@ in
in nameValuePair
("acme-${cert}")
({
description = "ACME cert renewal for ${cert} using simp_le";
description = "Renew ACME Certificate for ${cert}";
after = [ "network.target" ];
serviceConfig = {
Type = "oneshot";
@ -192,7 +192,7 @@ in
systemd.timers = flip mapAttrs' cfg.certs (cert: data: nameValuePair
("acme-${cert}")
({
description = "timer for ACME cert renewal of ${cert}";
description = "Renew ACME Certificate for ${cert}";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = cfg.renewInterval;

View file

@ -28,9 +28,9 @@ with lib;
capability setuid,
network inet raw,
${pkgs.glibc}/lib/*.so mr,
${pkgs.libcap}/lib/libcap.so* mr,
${pkgs.attr}/lib/libattr.so* mr,
${pkgs.glibc.out}/lib/*.so mr,
${pkgs.libcap.out}/lib/libcap.so* mr,
${pkgs.attr.out}/lib/libattr.so* mr,
${pkgs.iputils}/bin/ping mixr,
/var/setuid-wrappers/ping.real r,

View file

@ -0,0 +1,42 @@
{ config, pkgs, lib, ... }:
with lib;
{
options = {
security.hideProcessInformation = mkEnableOption "" // { description = ''
Restrict access to process information to the owning user. Enabling
this option implies, among other things, that command-line arguments
remain private. This option is recommended for most systems, unless
there's a legitimate reason for allowing unprivileged users to inspect
the process information of other users.
Members of the group "proc" are exempt from process information hiding.
To allow a service to run without process information hiding, add "proc"
to its supplementary groups via
<option>systemd.services.&lt;name?&gt;.serviceConfig.SupplementaryGroups</option>.
''; };
};
config = mkIf config.security.hideProcessInformation {
users.groups.proc.gid = config.ids.gids.proc;
systemd.services.hidepid = {
wantedBy = [ "local-fs.target" ];
after = [ "systemd-remount-fs.service" ];
before = [ "local-fs-pre.target" "local-fs.target" "shutdown.target" ];
wants = [ "local-fs-pre.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = ''${pkgs.utillinux}/bin/mount -o remount,hidepid=2,gid=${toString config.ids.gids.proc} /proc'';
ExecStop = ''${pkgs.utillinux}/bin/mount -o remount,hidepid=0,gid=0 /proc'';
};
unitConfig = {
DefaultDependencies = false;
Conflicts = "shutdown.target";
};
};
};
}
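A minimal usage sketch for the module above (the service name is hypothetical):
security.hideProcessInformation = true;
# allow a specific service to keep reading other users' process information:
systemd.services.my-monitor.serviceConfig.SupplementaryGroups = [ "proc" ];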

View file

@ -59,9 +59,9 @@ in
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.polkit ];
environment.systemPackages = [ pkgs.polkit.bin pkgs.polkit.out ];
systemd.packages = [ pkgs.polkit ];
systemd.packages = [ pkgs.polkit.out ];
systemd.services.polkit.restartTriggers = [ config.system.path ];
systemd.services.polkit.unitConfig.X-StopIfChanged = false;
@ -79,7 +79,7 @@ in
${cfg.extraConfig}
''; #TODO: validation on compilation (at least against typos)
services.dbus.packages = [ pkgs.polkit ];
services.dbus.packages = [ pkgs.polkit.out ];
security.pam.services.polkit-1 = {};
@ -90,7 +90,7 @@ in
owner = "root";
group = "root";
setuid = true;
source = "${pkgs.polkit}/lib/polkit-1/polkit-agent-helper-1";
source = "${pkgs.polkit.out}/lib/polkit-1/polkit-agent-helper-1";
}
];

View file

@ -8,12 +8,12 @@ let
setuidWrapper = pkgs.stdenv.mkDerivation {
name = "setuid-wrapper";
buildCommand = ''
unpackPhase = "true";
installPhase = ''
mkdir -p $out/bin
cp ${./setuid-wrapper.c} setuid-wrapper.c
gcc -Wall -O2 -DWRAPPER_DIR=\"${wrapperDir}\" \
setuid-wrapper.c -o $out/bin/setuid-wrapper
strip -S $out/bin/setuid-wrapper
'';
};

View file

@ -161,11 +161,11 @@ in {
'';
postStart = ''
until ${pkgs.curl}/bin/curl -s -L ${cfg.listenAddress}:${toString cfg.port}${cfg.prefix} ; do
until ${pkgs.curl.bin}/bin/curl -s -L ${cfg.listenAddress}:${toString cfg.port}${cfg.prefix} ; do
sleep 10
done
while true ; do
index=`${pkgs.curl}/bin/curl -s -L ${cfg.listenAddress}:${toString cfg.port}${cfg.prefix}`
index=`${pkgs.curl.bin}/bin/curl -s -L ${cfg.listenAddress}:${toString cfg.port}${cfg.prefix}`
if [[ !("$index" =~ 'Please wait while Jenkins is restarting' ||
"$index" =~ 'Please wait while Jenkins is getting ready to work') ]]; then
exit 0

View file

@ -87,7 +87,7 @@ in
mkdir -p ${cfg.dataDir}
chown -R ${cfg.user}:${cfg.group} ${cfg.dataDir}
'';
serviceConfig.ExecStart = "${openldap}/libexec/slapd -u ${cfg.user} -g ${cfg.group} -d 0 -f ${configFile}";
serviceConfig.ExecStart = "${openldap.out}/libexec/slapd -u ${cfg.user} -g ${cfg.group} -d 0 -f ${configFile}";
};
users.extraUsers.openldap =

View file

@ -37,7 +37,7 @@ in
services.dbus.packages = [ gnome3.gvfs ];
services.udev.packages = [ pkgs.libmtp ];
services.udev.packages = [ pkgs.libmtp.bin ];
};

View file

@ -0,0 +1,102 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.factorio;
name = "Factorio";
stateDir = "/var/lib/factorio";
configFile = pkgs.writeText "factorio.conf" ''
use-system-read-write-data-directories=true
[path]
read-data=${pkgs.factorio-headless}/share/factorio/data
write-data=${stateDir}
'';
in
{
options = {
services.factorio = {
enable = mkEnableOption name;
port = mkOption {
type = types.int;
default = 34197;
description = ''
The port to which the service should bind.
This option will also open up the UDP port in the firewall configuration.
'';
};
saveName = mkOption {
type = types.string;
default = "default";
description = ''
The name of the savegame that will be used by the server.
When not present in ${stateDir}/saves, it will be generated before starting the service.
'';
};
# TODO Add more individual settings as nixos-options?
# TODO XXX The server tries to copy a newly created config file over the old one
# on shutdown, but fails, because it's in the nix store. When is this needed?
# Can an admin set options in-game and expect to have them persisted?
configFile = mkOption {
type = types.path;
default = configFile;
defaultText = "configFile";
description = ''
The server's configuration file.
The default file generated by this module contains lines essential to
the server's operation. Use its contents as a basis for any
customizations.
'';
};
};
};
config = mkIf cfg.enable {
users = {
users.factorio = {
uid = config.ids.uids.factorio;
description = "Factorio server user";
group = "factorio";
home = stateDir;
createHome = true;
};
groups.factorio = {
gid = config.ids.gids.factorio;
};
};
systemd.services.factorio = {
description = "Factorio headless server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
preStart = ''
test -e ${stateDir}/saves/${cfg.saveName}.zip || ${pkgs.factorio-headless}/bin/factorio \
--config=${cfg.configFile} \
--create=${cfg.saveName}
'';
serviceConfig = {
User = "factorio";
Group = "factorio";
Restart = "always";
KillSignal = "SIGINT";
WorkingDirectory = stateDir;
PrivateTmp = true;
UMask = "0007";
ExecStart = toString [
"${pkgs.factorio-headless}/bin/factorio"
"--config=${cfg.configFile}"
"--port=${toString cfg.port}"
"--start-server=${cfg.saveName}"
];
};
};
networking.firewall.allowedUDPPorts = [ cfg.port ];
};
}
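A minimal usage sketch for the module above (the save name is an assumption):
services.factorio.enable = true;
services.factorio.port = 34197;         # default; the module opens this UDP port in the firewall
services.factorio.saveName = "default"; # generated under /var/lib/factorio/saves if missing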

View file

@ -72,7 +72,7 @@ let
run_progs=$(grep -v '^[[:space:]]*#' $out/* | grep 'RUN+="[^/$]' |
sed -e 's/.*RUN+="\([^ "]*\)[ "].*/\1/' | uniq)
for i in $import_progs $run_progs; do
if [[ ! -x ${pkgs.udev}/lib/udev/$i && ! $i =~ socket:.* ]]; then
if [[ ! -x ${udev}/lib/udev/$i && ! $i =~ socket:.* ]]; then
echo "FAIL"
echo "$i is called in udev rules but not installed by udev"
exit 1

View file

@ -51,7 +51,7 @@ in
systemd.services.upower =
{ description = "Power Management Daemon";
path = [ pkgs.glib ]; # needed for gdbus
path = [ pkgs.glib.out ]; # needed for gdbus
serviceConfig =
{ Type = "dbus";
BusName = "org.freedesktop.UPower";

View file

@ -65,7 +65,7 @@ in {
};
postStart = ''
until ${pkgs.curl}/bin/curl -s -o /dev/null 'http://${cfg.listenAddress}:${toString cfg.port}/'; do
until ${pkgs.curl.bin}/bin/curl -s -o /dev/null 'http://${cfg.listenAddress}:${toString cfg.port}/'; do
sleep 1;
done
'';

View file

@ -228,7 +228,7 @@ in {
'')
+ optionalString (service == "sql" && sql.driver == "sqlite") ''
cat "${gammuPackage}/${initDBDir}/sqlite.sql" \
| ${pkgs.sqlite}/bin/sqlite3 ${sql.database}
| ${pkgs.sqlite.bin}/bin/sqlite3 ${sql.database}
''
+ (let execPsql = extraArgs: concatStringsSep " " [
(optionalString (sql.password != null) "PGPASSWORD=${sql.password}")

View file

@ -358,7 +358,7 @@ in
systemd.sockets.nix-daemon.wantedBy = [ "sockets.target" ];
systemd.services.nix-daemon =
{ path = [ nix pkgs.openssl pkgs.utillinux config.programs.ssh.package ]
{ path = [ nix pkgs.openssl.bin pkgs.utillinux config.programs.ssh.package ]
++ optionals cfg.distributedBuilds [ pkgs.gzip ];
environment = cfg.envVars

View file

@ -10,7 +10,7 @@ let
plugins.cura.cura_engine = "${pkgs.curaengine}/bin/CuraEngine";
server.host = cfg.host;
server.port = cfg.port;
webcam.ffmpeg = "${pkgs.ffmpeg}/bin/ffmpeg";
webcam.ffmpeg = "${pkgs.ffmpeg.bin}/bin/ffmpeg";
};
fullConfig = recursiveUpdate cfg.extraConfig baseConfig;
@ -102,7 +102,7 @@ in
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [ pluginsEnv ];
environment.PYTHONPATH = makeSearchPath pkgs.python.sitePackages [ pluginsEnv ];
environment.PYTHONPATH = makeSearchPathOutputs pkgs.python.sitePackages ["lib"] [ pluginsEnv ];
preStart = ''
mkdir -p "${cfg.stateDir}"

View file

@ -128,6 +128,7 @@ in
Group = cfg.group;
PermissionsStartOnly = "true";
ExecStart = "/bin/sh -c '${cfg.package}/usr/lib/plexmediaserver/Plex\\ Media\\ Server'";
Restart = "on-failure";
};
environment = {
PLEX_MEDIA_SERVER_APPLICATION_SUPPORT_DIR=cfg.dataDir;

View file

@ -97,7 +97,7 @@ in
transcoders = mkOption {
type = types.listOf types.path;
default = [ "${pkgs.ffmpeg}/bin/ffmpeg" ];
default = [ "${pkgs.ffmpeg.bin}/bin/ffmpeg" ];
description = ''
List of paths to transcoder executables that should be accessible
from Subsonic. Symlinks will be created to each executable inside

View file

@ -38,7 +38,7 @@ in
after = [ "network-interfaces.target" ];
wantedBy = [ "multi-user.target" ];
preStart = "mkdir -p ${cfg.svnBaseDir}";
script = "${pkgs.subversion}/bin/svnserve -r ${cfg.svnBaseDir} -d --foreground --pid-file=/var/run/svnserve.pid";
script = "${pkgs.subversion.out}/bin/svnserve -r ${cfg.svnBaseDir} -d --foreground --pid-file=/var/run/svnserve.pid";
};
};
}

View file

@ -0,0 +1,541 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.taskserver;
taskd = "${pkgs.taskserver}/bin/taskd";
mkVal = val:
if val == true then "true"
else if val == false then "false"
else if isList val then concatStringsSep ", " val
else toString val;
mkConfLine = key: val: let
result = "${key} = ${mkVal val}";
in optionalString (val != null && val != []) result;
mkManualPkiOption = desc: mkOption {
type = types.nullOr types.path;
default = null;
description = desc + ''
<note><para>
Setting this option will prevent automatic CA creation and handling.
</para></note>
'';
};
manualPkiOptions = {
ca.cert = mkManualPkiOption ''
Fully qualified path to the CA certificate.
'';
server.cert = mkManualPkiOption ''
Fully qualified path to the server certificate.
'';
server.crl = mkManualPkiOption ''
Fully qualified path to the server certificate revocation list.
'';
server.key = mkManualPkiOption ''
Fully qualified path to the server key.
'';
};
mkAutoDesc = preamble: ''
${preamble}
<note><para>
This option is for the automatically handled CA and will be ignored if any
of the <option>services.taskserver.pki.manual.*</option> options are set.
</para></note>
'';
mkExpireOption = desc: mkOption {
type = types.nullOr types.int;
default = null;
example = 365;
apply = val: if isNull val then -1 else val;
description = mkAutoDesc ''
The expiration time of ${desc} in days or <literal>null</literal> for no
expiration time.
'';
};
autoPkiOptions = {
bits = mkOption {
type = types.int;
default = 4096;
example = 2048;
description = mkAutoDesc "The bit size for generated keys.";
};
expiration = {
ca = mkExpireOption "the CA certificate";
server = mkExpireOption "the server certificate";
client = mkExpireOption "client certificates";
crl = mkExpireOption "the certificate revocation list (CRL)";
};
};
needToCreateCA = let
notFound = path: let
dotted = concatStringsSep "." path;
in throw "Can't find option definitions for path `${dotted}'.";
findPkiDefinitions = path: attrs: let
mkSublist = key: val: let
newPath = path ++ singleton key;
in if isOption val
then attrByPath newPath (notFound newPath) cfg.pki.manual
else findPkiDefinitions newPath val;
in flatten (mapAttrsToList mkSublist attrs);
in all isNull (findPkiDefinitions [] manualPkiOptions);
configFile = pkgs.writeText "taskdrc" (''
# systemd related
daemon = false
log = -
# logging
${mkConfLine "debug" cfg.debug}
${mkConfLine "ip.log" cfg.ipLog}
# general
${mkConfLine "ciphers" cfg.ciphers}
${mkConfLine "confirmation" cfg.confirmation}
${mkConfLine "extensions" cfg.extensions}
${mkConfLine "queue.size" cfg.queueSize}
${mkConfLine "request.limit" cfg.requestLimit}
# client
${mkConfLine "client.allow" cfg.allowedClientIDs}
${mkConfLine "client.deny" cfg.disallowedClientIDs}
# server
server = ${cfg.listenHost}:${toString cfg.listenPort}
${mkConfLine "trust" cfg.trust}
# PKI options
${if needToCreateCA then ''
ca.cert = ${cfg.dataDir}/keys/ca.cert
server.cert = ${cfg.dataDir}/keys/server.cert
server.key = ${cfg.dataDir}/keys/server.key
server.crl = ${cfg.dataDir}/keys/server.crl
'' else ''
ca.cert = ${cfg.pki.ca.cert}
server.cert = ${cfg.pki.server.cert}
server.key = ${cfg.pki.server.key}
server.crl = ${cfg.pki.server.crl}
''}
'' + cfg.extraConfig);
orgOptions = { name, ... }: {
options.users = mkOption {
type = types.uniq (types.listOf types.str);
default = [];
example = [ "alice" "bob" ];
description = ''
A list of user names that belong to the organization.
'';
};
options.groups = mkOption {
type = types.listOf types.str;
default = [];
example = [ "workers" "slackers" ];
description = ''
A list of group names that belong to the organization.
'';
};
};
mkShellStr = val: "'${replaceStrings ["'"] ["'\\''"] val}'";
certtool = "${pkgs.gnutls.bin}/bin/certtool";
nixos-taskserver = pkgs.buildPythonPackage {
name = "nixos-taskserver";
namePrefix = "";
src = pkgs.runCommand "nixos-taskserver-src" {} ''
mkdir -p "$out"
cat "${pkgs.substituteAll {
src = ./helper-tool.py;
inherit taskd certtool;
inherit (cfg) dataDir user group fqdn;
certBits = cfg.pki.auto.bits;
clientExpiration = cfg.pki.auto.expiration.client;
crlExpiration = cfg.pki.auto.expiration.crl;
}}" > "$out/main.py"
cat > "$out/setup.py" <<EOF
from setuptools import setup
setup(name="nixos-taskserver",
py_modules=["main"],
install_requires=["Click"],
entry_points="[console_scripts]\\nnixos-taskserver=main:cli")
EOF
'';
propagatedBuildInputs = [ pkgs.pythonPackages.click ];
};
in {
options = {
services.taskserver = {
enable = mkOption {
type = types.bool;
default = false;
example = true;
description = ''
Whether to enable the Taskwarrior server.
More instructions about NixOS in conjunction with Taskserver can be
found in the NixOS manual at
<olink targetdoc="manual" targetptr="module-taskserver"/>.
'';
};
user = mkOption {
type = types.str;
default = "taskd";
description = "User for Taskserver.";
};
group = mkOption {
type = types.str;
default = "taskd";
description = "Group for Taskserver.";
};
dataDir = mkOption {
type = types.path;
default = "/var/lib/taskserver";
description = "Data directory for Taskserver.";
};
ciphers = mkOption {
type = types.nullOr (types.separatedString ":");
default = null;
example = "NORMAL:-VERS-SSL3.0";
description = let
url = "https://gnutls.org/manual/html_node/Priority-Strings.html";
in ''
List of GnuTLS ciphers to use. See the GnuTLS documentation about
priority strings at <link xlink:href="${url}"/> for full details.
'';
};
organisations = mkOption {
type = types.attrsOf (types.submodule orgOptions);
default = {};
example.myShinyOrganisation.users = [ "alice" "bob" ];
example.myShinyOrganisation.groups = [ "staff" "outsiders" ];
example.yetAnotherOrganisation.users = [ "foo" "bar" ];
description = ''
An attribute set where the keys name the organisation and the values
are a set of lists of <option>users</option> and
<option>groups</option>.
'';
};
confirmation = mkOption {
type = types.bool;
default = true;
description = ''
Determines whether certain commands are confirmed.
'';
};
debug = mkOption {
type = types.bool;
default = false;
description = ''
Logs debugging information.
'';
};
extensions = mkOption {
type = types.nullOr types.path;
default = null;
description = ''
Fully qualified path of the Taskserver extension scripts.
Currently there are none.
'';
};
ipLog = mkOption {
type = types.bool;
default = false;
description = ''
Logs the IP addresses of incoming requests.
'';
};
queueSize = mkOption {
type = types.int;
default = 10;
description = ''
Size of the connection backlog, see <citerefentry>
<refentrytitle>listen</refentrytitle>
<manvolnum>2</manvolnum>
</citerefentry>.
'';
};
requestLimit = mkOption {
type = types.int;
default = 1048576;
description = ''
Size limit of incoming requests, in bytes.
'';
};
allowedClientIDs = mkOption {
type = with types; loeOf (either (enum ["all" "none"]) str);
default = [];
example = [ "[Tt]ask [2-9]+" ];
description = ''
A list of regular expressions that are matched against the reported
client id (such as <literal>task 2.3.0</literal>).
The values <literal>all</literal> or <literal>none</literal> have
special meaning. Overridden by any entry in the option
<option>services.taskserver.disallowedClientIDs</option>.
'';
};
disallowedClientIDs = mkOption {
type = with types; loeOf (either (enum ["all" "none"]) str);
default = [];
example = [ "[Tt]ask [2-9]+" ];
description = ''
A list of regular expressions that are matched against the reported
client id (such as <literal>task 2.3.0</literal>).
The values <literal>all</literal> or <literal>none</literal> have
special meaning. Any entry here overrides those in
<option>services.taskserver.allowedClientIDs</option>.
'';
};
listenHost = mkOption {
type = types.str;
default = "localhost";
example = "::";
description = ''
The address (IPv4, IPv6 or DNS) to listen on.
If the value is anything other than <literal>localhost</literal>, the
port defined by <option>listenPort</option> is automatically added to
<option>networking.firewall.allowedTCPPorts</option>.
'';
};
listenPort = mkOption {
type = types.int;
default = 53589;
description = ''
Port number of the Taskserver.
'';
};
fqdn = mkOption {
type = types.str;
default = "localhost";
description = ''
The fully qualified domain name of this server, which is also used
as the common name in the certificates.
'';
};
trust = mkOption {
type = types.enum [ "allow all" "strict" ];
default = "strict";
description = ''
Determines how client certificates are validated.
The value <literal>allow all</literal> performs no client
certificate validation. This is not recommended. The value
<literal>strict</literal> causes the client certificate to be
validated against a CA.
'';
};
pki.manual = manualPkiOptions;
pki.auto = autoPkiOptions;
extraConfig = mkOption {
type = types.lines;
default = "";
example = "client.cert = /tmp/debugging.cert";
description = ''
Extra lines to append to the taskdrc configuration file.
'';
};
};
};
config = mkMerge [
(mkIf cfg.enable {
environment.systemPackages = [ pkgs.taskserver nixos-taskserver ];
users.users = optional (cfg.user == "taskd") {
name = "taskd";
uid = config.ids.uids.taskd;
description = "Taskserver user";
group = cfg.group;
};
users.groups = optional (cfg.group == "taskd") {
name = "taskd";
gid = config.ids.gids.taskd;
};
systemd.services.taskserver-init = {
wantedBy = [ "taskserver.service" ];
before = [ "taskserver.service" ];
description = "Initialize Taskserver Data Directory";
preStart = ''
mkdir -m 0770 -p "${cfg.dataDir}"
chown "${cfg.user}:${cfg.group}" "${cfg.dataDir}"
'';
script = ''
${taskd} init
echo "include ${configFile}" > "${cfg.dataDir}/config"
touch "${cfg.dataDir}/.is_initialized"
'';
environment.TASKDDATA = cfg.dataDir;
unitConfig.ConditionPathExists = "!${cfg.dataDir}/.is_initialized";
serviceConfig.Type = "oneshot";
serviceConfig.User = cfg.user;
serviceConfig.Group = cfg.group;
serviceConfig.PermissionsStartOnly = true;
serviceConfig.PrivateNetwork = true;
serviceConfig.PrivateDevices = true;
serviceConfig.PrivateTmp = true;
};
systemd.services.taskserver = {
description = "Taskwarrior Server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment.TASKDDATA = cfg.dataDir;
preStart = let
jsonOrgs = builtins.toJSON cfg.organisations;
jsonFile = pkgs.writeText "orgs.json" jsonOrgs;
helperTool = "${nixos-taskserver}/bin/nixos-taskserver";
in "${helperTool} process-json '${jsonFile}'";
serviceConfig = {
ExecStart = "@${taskd} taskd server";
ExecReload = "${pkgs.coreutils}/bin/kill -USR1 $MAINPID";
Restart = "on-failure";
PermissionsStartOnly = true;
PrivateTmp = true;
PrivateDevices = true;
User = cfg.user;
Group = cfg.group;
};
};
})
(mkIf needToCreateCA {
systemd.services.taskserver-ca = {
wantedBy = [ "taskserver.service" ];
after = [ "taskserver-init.service" ];
before = [ "taskserver.service" ];
description = "Initialize CA for TaskServer";
serviceConfig.Type = "oneshot";
serviceConfig.UMask = "0077";
serviceConfig.PrivateNetwork = true;
serviceConfig.PrivateTmp = true;
script = ''
silent_certtool() {
if ! output="$("${certtool}" "$@" 2>&1)"; then
echo "GNUTLS certtool invocation failed with output:" >&2
echo "$output" >&2
fi
}
mkdir -m 0700 -p "${cfg.dataDir}/keys"
chown root:root "${cfg.dataDir}/keys"
if [ ! -e "${cfg.dataDir}/keys/ca.key" ]; then
silent_certtool -p \
--bits ${toString cfg.pki.auto.bits} \
--outfile "${cfg.dataDir}/keys/ca.key"
silent_certtool -s \
--template "${pkgs.writeText "taskserver-ca.template" ''
cn = ${cfg.fqdn}
expiration_days = ${toString cfg.pki.auto.expiration.ca}
cert_signing_key
ca
''}" \
--load-privkey "${cfg.dataDir}/keys/ca.key" \
--outfile "${cfg.dataDir}/keys/ca.cert"
chgrp "${cfg.group}" "${cfg.dataDir}/keys/ca.cert"
chmod g+r "${cfg.dataDir}/keys/ca.cert"
fi
if [ ! -e "${cfg.dataDir}/keys/server.key" ]; then
silent_certtool -p \
--bits ${toString cfg.pki.auto.bits} \
--outfile "${cfg.dataDir}/keys/server.key"
silent_certtool -c \
--template "${pkgs.writeText "taskserver-cert.template" ''
cn = ${cfg.fqdn}
expiration_days = ${toString cfg.pki.auto.expiration.server}
tls_www_server
encryption_key
signing_key
''}" \
--load-ca-privkey "${cfg.dataDir}/keys/ca.key" \
--load-ca-certificate "${cfg.dataDir}/keys/ca.cert" \
--load-privkey "${cfg.dataDir}/keys/server.key" \
--outfile "${cfg.dataDir}/keys/server.cert"
chgrp "${cfg.group}" \
"${cfg.dataDir}/keys/server.key" \
"${cfg.dataDir}/keys/server.cert"
chmod g+r \
"${cfg.dataDir}/keys/server.key" \
"${cfg.dataDir}/keys/server.cert"
fi
if [ ! -e "${cfg.dataDir}/keys/server.crl" ]; then
silent_certtool --generate-crl \
--template "${pkgs.writeText "taskserver-crl.template" ''
expiration_days = ${toString cfg.pki.auto.expiration.crl}
''}" \
--load-ca-privkey "${cfg.dataDir}/keys/ca.key" \
--load-ca-certificate "${cfg.dataDir}/keys/ca.cert" \
--outfile "${cfg.dataDir}/keys/server.crl"
chgrp "${cfg.group}" "${cfg.dataDir}/keys/server.crl"
chmod g+r "${cfg.dataDir}/keys/server.crl"
fi
chmod go+x "${cfg.dataDir}/keys"
'';
};
})
(mkIf (cfg.listenHost != "localhost") {
networking.firewall.allowedTCPPorts = [ cfg.listenPort ];
})
{ meta.doc = ./taskserver.xml; }
];
}

View file

@ -0,0 +1,144 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="module-taskserver">
<title>Taskserver</title>
<para>
Taskserver is the server component of
<link xlink:href="https://taskwarrior.org/">Taskwarrior</link>, a free and
open source todo list application.
</para>
<para>
<emphasis>Upstream documentation:</emphasis>
<link xlink:href="https://taskwarrior.org/docs/#taskd"/>
</para>
<section>
<title>Configuration</title>
<para>
Taskserver does all of its authentication via TLS using client
certificates, so you either need to roll your own CA or purchase a
certificate from a known CA that allows the creation of client
certificates.
These certificates are usually advertised as
<quote>server certificates</quote>.
</para>
<para>
To make it easier to handle your own CA, there is a helper
tool called <command>nixos-taskserver</command> which manages the custom
CA along with Taskserver organisations, users and groups.
</para>
<para>
While the client certificates in Taskserver only authenticate whether a
user is allowed to connect, every user has their own UUID, which
identifies them as an entity.
</para>
<para>
With <command>nixos-taskserver</command> the client certificate is created
along with the UUID of the user, so it handles all of the credentials
needed to set up the Taskwarrior client to work with a Taskserver.
</para>
</section>
<section>
<title>The nixos-taskserver tool</title>
<para>
Because Taskserver by default only provides scripts to set up users
imperatively, the <command>nixos-taskserver</command> tool is used for
adding and deleting organisations, users and groups defined by
<option>services.taskserver.organisations</option>, as well as for
imperative setup.
</para>
<para>
The tool is designed not to interfere when it is used to manually
set up some organisations, users or groups.
</para>
<para>
For example, if you add a new organisation using
<command>nixos-taskserver org add foo</command>, the organisation is
neither modified nor deleted, no matter what you define in
<option>services.taskserver.organisations</option>, even if you add
the same organisation in that option.
</para>
<para>
The tool is modelled after the official <command>taskd</command>
command; documentation for each subcommand can be shown by using the
<option>--help</option> switch.
</para>
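<para>
For illustration, an imperative workflow with the tool could look roughly
like this (<literal>my-company</literal> and <literal>bob</literal> are
just placeholder names):
<screen>
$ nixos-taskserver org add my-company
$ nixos-taskserver user add my-company bob
$ nixos-taskserver user list my-company
$ nixos-taskserver user export my-company bob
</screen>
</para>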
</section>
<section>
<title>Declarative/automatic CA management</title>
<para>
Everything is done according to what you specify in the module options;
however, in order to set up a Taskwarrior client for synchronisation with a
Taskserver instance, you have to transfer the keys and certificates to the
client machine.
</para>
<para>
This is done using
<command>nixos-taskserver user export $orgname $username</command>, which
prints a shell script fragment to stdout that can either be used
verbatim or adjusted to import the user on the client machine.
</para>
<para>
For example, let's say you have the following configuration:
<screen>
{
services.taskserver.enable = true;
services.taskserver.fqdn = "server";
services.taskserver.listenHost = "::";
services.taskserver.organisations.my-company.users = [ "alice" ];
}
</screen>
This creates an organisation called <literal>my-company</literal> with the
user <literal>alice</literal>.
</para>
<para>
Now in order to import the <literal>alice</literal> user to another
machine <literal>alicebox</literal>, all we need to do is something like
this:
<screen>
$ ssh server nixos-taskserver user export my-company alice | sh
</screen>
Of course, if no SSH daemon is available on the server you can also copy
&amp; paste it directly into a shell.
</para>
<para>
After this step the user should be set up and you can start synchronising
your tasks for the first time with <command>task sync init</command> on
<literal>alicebox</literal>.
</para>
<para>
After that stage, subsequent synchronisation requests merely require
the command <command>task sync</command>.
</para>
</section>
<section>
<title>Manual CA management</title>
<para>
If you set any options within
<option>services.taskserver.pki.manual.*</option>, the automatic user and
CA management by the <command>nixos-taskserver</command> tool is disabled
and you need to create certificates and keys yourself.
</para>
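<para>
As a rough sketch of what that involves, the required files can be
generated with GnuTLS <command>certtool</command>, mirroring the commands
that the automatic setup above runs (key size, template contents and file
names here are only placeholders to adapt):
<screen>
$ certtool -p --bits 4096 --outfile ca.key
$ certtool -s --template ca.template --load-privkey ca.key --outfile ca.cert
$ certtool -p --bits 4096 --outfile server.key
$ certtool -c --template server.template --load-privkey server.key \
  --load-ca-privkey ca.key --load-ca-certificate ca.cert --outfile server.cert
</screen>
The CA template needs at least <literal>cn</literal>,
<literal>cert_signing_key</literal> and <literal>ca</literal>; the server
template needs <literal>cn</literal>, <literal>tls_www_server</literal>,
<literal>encryption_key</literal> and <literal>signing_key</literal>.
The resulting files are then referenced via the
<option>services.taskserver.pki.manual.*</option> options.
</para>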
</section>
</chapter>

View file

@ -0,0 +1,673 @@
import grp
import json
import pwd
import os
import re
import string
import subprocess
import sys
from contextlib import contextmanager
from shutil import rmtree
from tempfile import NamedTemporaryFile
import click
CERTTOOL_COMMAND = "@certtool@"
CERT_BITS = "@certBits@"
CLIENT_EXPIRATION = "@clientExpiration@"
CRL_EXPIRATION = "@crlExpiration@"
TASKD_COMMAND = "@taskd@"
TASKD_DATA_DIR = "@dataDir@"
TASKD_USER = "@user@"
TASKD_GROUP = "@group@"
FQDN = "@fqdn@"
CA_KEY = os.path.join(TASKD_DATA_DIR, "keys", "ca.key")
CA_CERT = os.path.join(TASKD_DATA_DIR, "keys", "ca.cert")
CRL_FILE = os.path.join(TASKD_DATA_DIR, "keys", "server.crl")
RE_CONFIGUSER = re.compile(r'^\s*user\s*=(.*)$')
RE_USERKEY = re.compile(r'New user key: (.+)$', re.MULTILINE)
def lazyprop(fun):
"""
Decorator which only evaluates the specified function when accessed.
"""
name = '_lazy_' + fun.__name__
@property
def _lazy(self):
val = getattr(self, name, None)
if val is None:
val = fun(self)
setattr(self, name, val)
return val
return _lazy
class TaskdError(OSError):
pass
def run_as_taskd_user():
uid = pwd.getpwnam(TASKD_USER).pw_uid
gid = grp.getgrnam(TASKD_GROUP).gr_gid
os.setgid(gid)
os.setuid(uid)
def taskd_cmd(cmd, *args, **kwargs):
"""
Invoke taskd with the specified command with the privileges of the 'taskd'
user and 'taskd' group.
If 'capture_stdout' is passed as a keyword argument with the value True,
the return value is the output that the command printed to stdout.
"""
capture_stdout = kwargs.pop("capture_stdout", False)
fun = subprocess.check_output if capture_stdout else subprocess.check_call
return fun(
[TASKD_COMMAND, cmd, "--data", TASKD_DATA_DIR] + list(args),
preexec_fn=run_as_taskd_user,
**kwargs
)
def certtool_cmd(*args, **kwargs):
"""
Invoke certtool from GNUTLS and return the output of the command.
The provided arguments are added to the certtool command and keyword
arguments are added to subprocess.check_output().
Note that this suppresses all output of certtool; on an unsuccessful
return code the output is only available from the raised CalledProcessError.
"""
return subprocess.check_output(
[CERTTOOL_COMMAND] + list(args),
preexec_fn=lambda: os.umask(0077),
stderr=subprocess.STDOUT,
**kwargs
)
def label(msg):
if sys.stdout.isatty() or sys.stderr.isatty():
sys.stderr.write(msg + "\n")
def mkpath(*args):
return os.path.join(TASKD_DATA_DIR, "orgs", *args)
def mark_imperative(*path):
"""
Mark the specified path as being imperatively managed by creating an empty
file called ".imperative", so that it doesn't interfere with the
declarative configuration.
"""
open(os.path.join(mkpath(*path), ".imperative"), 'a').close()
def is_imperative(*path):
"""
Check whether the given path is marked as imperative, see mark_imperative()
for more information.
"""
full_path = []
for component in path:
full_path.append(component)
if os.path.exists(os.path.join(mkpath(*full_path), ".imperative")):
return True
return False
def fetch_username(org, key):
for line in open(mkpath(org, "users", key, "config"), "r"):
match = RE_CONFIGUSER.match(line)
if match is None:
continue
return match.group(1).strip()
return None
@contextmanager
def create_template(contents):
"""
Generate a temporary file with the specified contents as a list of strings
and yield its path as the context.
"""
template = NamedTemporaryFile(mode="w", prefix="certtool-template")
template.writelines(map(lambda l: l + "\n", contents))
template.flush()
yield template.name
template.close()
def generate_key(org, user):
basedir = os.path.join(TASKD_DATA_DIR, "keys", org, user)
if os.path.exists(basedir):
raise OSError("Keyfile directory for {} already exists.".format(user))
privkey = os.path.join(basedir, "private.key")
pubcert = os.path.join(basedir, "public.cert")
try:
os.makedirs(basedir, mode=0700)
certtool_cmd("-p", "--bits", CERT_BITS, "--outfile", privkey)
template_data = [
"organization = {0}".format(org),
"cn = {}".format(FQDN),
"expiration_days = {}".format(CLIENT_EXPIRATION),
"tls_www_client",
"encryption_key",
"signing_key"
]
with create_template(template_data) as template:
certtool_cmd(
"-c",
"--load-privkey", privkey,
"--load-ca-privkey", CA_KEY,
"--load-ca-certificate", CA_CERT,
"--template", template,
"--outfile", pubcert
)
except:
rmtree(basedir)
raise
def revoke_key(org, user):
basedir = os.path.join(TASKD_DATA_DIR, "keys", org, user)
if not os.path.exists(basedir):
raise OSError("Keyfile directory for {} doesn't exist.".format(user))
pubcert = os.path.join(basedir, "public.cert")
expiration = "expiration_days = {}".format(CRL_EXPIRATION)
with create_template([expiration]) as template:
oldcrl = NamedTemporaryFile(mode="wb", prefix="old-crl")
oldcrl.write(open(CRL_FILE, "rb").read())
oldcrl.flush()
certtool_cmd(
"--generate-crl",
"--load-crl", oldcrl.name,
"--load-ca-privkey", CA_KEY,
"--load-ca-certificate", CA_CERT,
"--load-certificate", pubcert,
"--template", template,
"--outfile", CRL_FILE
)
oldcrl.close()
rmtree(basedir)
def is_key_line(line, match):
return line.startswith("---") and line.lstrip("- ").startswith(match)
def getkey(*args):
path = os.path.join(TASKD_DATA_DIR, "keys", *args)
buf = []
for line in open(path, "r"):
if len(buf) == 0:
if is_key_line(line, "BEGIN"):
buf.append(line)
continue
buf.append(line)
if is_key_line(line, "END"):
return ''.join(buf)
raise IOError("Unable to get key from {}.".format(path))
def mktaskkey(cfg, path, keydata):
heredoc = 'cat > "{}" <<EOF\n{}EOF'.format(path, keydata)
cmd = 'task config taskd.{} -- "{}"'.format(cfg, path)
return heredoc + "\n" + cmd
class User(object):
def __init__(self, org, name, key):
self.__org = org
self.name = name
self.key = key
def export(self):
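# Emits a small shell script that installs public.cert, private.key and
# ca.cert below "$TASKDATA/keys" and sets the matching 'task config
# taskd.*' options, including taskd.credentials (org/user/key).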
pubcert = getkey(self.__org, self.name, "public.cert")
privkey = getkey(self.__org, self.name, "private.key")
cacert = getkey("ca.cert")
keydir = "${TASKDATA:-$HOME/.task}/keys"
credentials = '/'.join([self.__org, self.name, self.key])
allow_unquoted = string.ascii_letters + string.digits + "/-_."
if not all((c in allow_unquoted) for c in credentials):
credentials = "'" + credentials.replace("'", r"'\''") + "'"
script = [
"umask 0077",
'mkdir -p "{}"'.format(keydir),
mktaskkey("certificate", os.path.join(keydir, "public.cert"),
pubcert),
mktaskkey("key", os.path.join(keydir, "private.key"), privkey),
mktaskkey("ca", os.path.join(keydir, "ca.cert"), cacert),
"task config taskd.credentials -- {}".format(credentials)
]
return "\n".join(script) + "\n"
class Group(object):
def __init__(self, org, name):
self.__org = org
self.name = name
class Organisation(object):
def __init__(self, name, ignore_imperative):
self.name = name
self.ignore_imperative = ignore_imperative
def add_user(self, name):
"""
Create a new user along with a certificate and key.
Returns a 'User' object or None if the user already exists.
"""
if self.ignore_imperative and is_imperative(self.name):
return None
if name not in self.users.keys():
output = taskd_cmd("add", "user", self.name, name,
capture_stdout=True)
key = RE_USERKEY.search(output)
if key is None:
msg = "Unable to find key while creating user {}."
raise TaskdError(msg.format(name))
generate_key(self.name, name)
newuser = User(self.name, name, key.group(1))
self._lazy_users[name] = newuser
return newuser
return None
def del_user(self, name):
"""
Delete a user and revoke its keys.
"""
if name in self.users.keys():
user = self.get_user(name)
if self.ignore_imperative and \
is_imperative(self.name, "users", user.key):
return
# Work around https://bug.tasktools.org/browse/TD-40:
rmtree(mkpath(self.name, "users", user.key))
revoke_key(self.name, name)
del self._lazy_users[name]
def add_group(self, name):
"""
Create a new group.
Returns a 'Group' object or None if the group already exists.
"""
if self.ignore_imperative and is_imperative(self.name):
return None
if name not in self.groups.keys():
taskd_cmd("add", "group", self.name, name)
newgroup = Group(self.name, name)
self._lazy_groups[name] = newgroup
return newgroup
return None
def del_group(self, name):
"""
Delete a group.
"""
if name in self.groups.keys():
if self.ignore_imperative and \
is_imperative(self.name, "groups", name):
return
taskd_cmd("remove", "group", self.name, name)
del self._lazy_groups[name]
def get_user(self, name):
return self.users.get(name)
@lazyprop
def users(self):
result = {}
for key in os.listdir(mkpath(self.name, "users")):
user = fetch_username(self.name, key)
if user is not None:
result[user] = User(self.name, user, key)
return result
def get_group(self, name):
return self.groups.get(name)
@lazyprop
def groups(self):
result = {}
for group in os.listdir(mkpath(self.name, "groups")):
result[group] = Group(self.name, group)
return result
class Manager(object):
def __init__(self, ignore_imperative=False):
"""
Instantiates an organisations manager.
If ignore_imperative is True, all actions that modify data first check
whether the affected entities were created imperatively and, if so,
result in no operation.
"""
self.ignore_imperative = ignore_imperative
def add_org(self, name):
"""
Create a new organisation.
Returns an 'Organisation' object or None if the organisation already
exists.
"""
if name not in self.orgs.keys():
taskd_cmd("add", "org", name)
neworg = Organisation(name, self.ignore_imperative)
self._lazy_orgs[name] = neworg
return neworg
return None
def del_org(self, name):
"""
Delete and revoke keys of an organisation with all its users and
groups.
"""
org = self.get_org(name)
if org is not None:
if self.ignore_imperative and is_imperative(name):
return
for user in org.users.keys():
org.del_user(user)
for group in org.groups.keys():
org.del_group(group)
taskd_cmd("remove", "org", name)
del self._lazy_orgs[name]
def get_org(self, name):
return self.orgs.get(name)
@lazyprop
def orgs(self):
result = {}
for org in os.listdir(mkpath()):
result[org] = Organisation(org, self.ignore_imperative)
return result
class OrganisationType(click.ParamType):
name = 'organisation'
def convert(self, value, param, ctx):
org = Manager().get_org(value)
if org is None:
self.fail("Organisation {} does not exist.".format(value))
return org
ORGANISATION = OrganisationType()
@click.group()
@click.pass_context
def cli(ctx):
"""
Manage Taskserver users and certificates
"""
for path in (CA_KEY, CA_CERT, CRL_FILE):
if not os.path.exists(path):
msg = "CA setup not done or incomplete, missing file {}."
ctx.fail(msg.format(path))
@cli.group("org")
def org_cli():
"""
Manage organisations
"""
pass
@cli.group("user")
def user_cli():
"""
Manage users
"""
pass
@cli.group("group")
def group_cli():
"""
Manage groups
"""
pass
@user_cli.command("list")
@click.argument("organisation", type=ORGANISATION)
def list_users(organisation):
"""
List all users belonging to the specified organisation.
"""
label("The following users exists for {}:".format(organisation.name))
for user in organisation.users.values():
sys.stdout.write(user.name + "\n")
@group_cli.command("list")
@click.argument("organisation", type=ORGANISATION)
def list_groups(organisation):
"""
List all groups belonging to the specified organisation.
"""
label("The following groups exist for {}:".format(organisation.name))
for group in organisation.groups.values():
sys.stdout.write(group.name + "\n")
@org_cli.command("list")
def list_orgs():
"""
List available organisations
"""
label("The following organisations exist:")
for org in Manager().orgs.values():
sys.stdout.write(org.name + "\n")
@user_cli.command("getkey")
@click.argument("organisation", type=ORGANISATION)
@click.argument("user")
def get_uuid(organisation, user):
"""
Get the UUID of the specified user belonging to the specified organisation.
"""
userobj = organisation.get_user(user)
if userobj is None:
msg = "User {} doesn't exist in organisation {}."
sys.exit(msg.format(user, organisation.name))
label("User {} has the following UUID:".format(userobj.name))
sys.stdout.write(userobj.key + "\n")
@user_cli.command("export")
@click.argument("organisation", type=ORGANISATION)
@click.argument("user")
def export_user(organisation, user):
"""
Export user of the specified organisation as a series of shell commands
that can be used on the client side to easily import the certificates.
Note that the private key will be exported as well, so use this with care!
"""
userobj = organisation.get_user(user)
if userobj is None:
msg = "User {} doesn't exist in organisation {}."
sys.exit(msg.format(user, organisation.name))
sys.stdout.write(userobj.export())
@org_cli.command("add")
@click.argument("name")
def add_org(name):
"""
Create an organisation with the specified name.
"""
if os.path.exists(mkpath(name)):
msg = "Organisation with name {} already exists."
sys.exit(msg.format(name))
taskd_cmd("add", "org", name)
mark_imperative(name)
@org_cli.command("remove")
@click.argument("name")
def del_org(name):
"""
Delete the organisation with the specified name.
All of the users and groups will be deleted as well and client certificates
will be revoked.
"""
Manager().del_org(name)
msg = ("Organisation {} deleted. Be sure to restart the Taskserver"
" using 'systemctl restart taskserver.service' in order for"
" the certificate revocation to apply.")
click.echo(msg.format(name), err=True)
@user_cli.command("add")
@click.argument("organisation", type=ORGANISATION)
@click.argument("user")
def add_user(organisation, user):
"""
Create a user for the given organisation along with a client certificate
and print the key of the new user.
The client certificate along with its public key can be shown via the
'user export' subcommand.
"""
userobj = organisation.add_user(user)
if userobj is None:
msg = "User {} already exists in organisation {}."
sys.exit(msg.format(user, organisation.name))
else:
mark_imperative(organisation.name, "users", userobj.key)
@user_cli.command("remove")
@click.argument("organisation", type=ORGANISATION)
@click.argument("user")
def del_user(organisation, user):
"""
Delete a user from the given organisation.
This will also revoke the client certificate of the given user.
"""
organisation.del_user(user)
msg = ("User {} deleted. Be sure to restart the Taskserver using"
" 'systemctl restart taskserver.service' in order for the"
" certificate revocation to apply.")
click.echo(msg.format(user), err=True)
@group_cli.command("add")
@click.argument("organisation", type=ORGANISATION)
@click.argument("group")
def add_group(organisation, group):
"""
Create a group for the given organisation.
"""
groupobj = organisation.add_group(group)
if groupobj is None:
msg = "Group {} already exists in organisation {}."
sys.exit(msg.format(group, organisation.name))
else:
mark_imperative(organisation.name, "groups", groupobj.name)
@group_cli.command("remove")
@click.argument("organisation", type=ORGANISATION)
@click.argument("group")
def del_group(organisation, group):
"""
Delete a group from the given organisation.
"""
organisation.del_group(group)
click("Group {} deleted.".format(group), err=True)
def add_or_delete(old, new, add_fun, del_fun):
"""
Given an 'old' and 'new' list, figure out the differences and invoke
'add_fun' for every element that is not in the 'old' list and 'del_fun'
for every element that is not in the 'new' list.
Returns a tuple where the first element is the set of elements that were
added and the second element is the set of elements that were deleted.
"""
old_set = set(old)
new_set = set(new)
to_delete = old_set - new_set
to_add = new_set - old_set
for elem in to_delete:
del_fun(elem)
for elem in to_add:
add_fun(elem)
return to_add, to_delete
@cli.command("process-json")
@click.argument('json-file', type=click.File('rb'))
def process_json(json_file):
"""
Create and delete users, groups and organisations based on a JSON file.
The structure of this file is exactly the same as the
'services.taskserver.organisations' option of the NixOS module and is used
for declaratively adding and deleting users.
Hence this subcommand is not recommended outside of the scope of the NixOS
module.
"""
data = json.load(json_file)
mgr = Manager(ignore_imperative=True)
add_or_delete(mgr.orgs.keys(), data.keys(), mgr.add_org, mgr.del_org)
for org in mgr.orgs.values():
if is_imperative(org.name):
continue
add_or_delete(org.users.keys(), data[org.name]['users'],
org.add_user, org.del_user)
add_or_delete(org.groups.keys(), data[org.name]['groups'],
org.add_group, org.del_group)
if __name__ == '__main__':
cli()

View file

@ -71,7 +71,7 @@ in {
after = [ "network.target" "docker.service" "influxdb.service" ];
postStart = mkBefore ''
until ${pkgs.curl}/bin/curl -s -o /dev/null 'http://${cfg.listenAddress}:${toString cfg.port}/containers/'; do
until ${pkgs.curl.bin}/bin/curl -s -o /dev/null 'http://${cfg.listenAddress}:${toString cfg.port}/containers/'; do
sleep 1;
done
'';

View file

@ -509,7 +509,7 @@ in {
};
in "${aenv}/${pkgs.python.sitePackages}";
GRAPHITE_API_CONFIG = graphiteApiConfig;
LD_LIBRARY_PATH = "${pkgs.cairo}/lib";
LD_LIBRARY_PATH = "${pkgs.cairo.out}/lib";
};
serviceConfig = {
ExecStart = ''

View file

@ -26,6 +26,15 @@ in
The port on which the introducer will listen.
'';
};
tub.location = mkOption {
default = null;
type = types.nullOr types.str;
description = ''
The external location that the introducer should listen on.
If specified, the port should be included.
'';
};
package = mkOption {
default = pkgs.tahoelafs;
defaultText = "pkgs.tahoelafs";
@ -60,6 +69,18 @@ in
system to listen on a different port.
'';
};
tub.location = mkOption {
default = null;
type = types.nullOr types.str;
description = ''
The external location that the node should listen on.
This is the setting to tweak if there are multiple interfaces
and you want to alter which interface Tahoe is advertising.
If specified, the port should be included.
'';
};
web.port = mkOption {
default = 3456;
type = types.int;
@ -144,6 +165,8 @@ in
[node]
nickname = ${settings.nickname}
tub.port = ${toString settings.tub.port}
${optionalString (settings.tub.location != null)
"tub.location = ${settings.tub.location}"}
'';
});
# Actually require Tahoe, so that we will have it installed.
@ -209,6 +232,8 @@ in
[node]
nickname = ${settings.nickname}
tub.port = ${toString settings.tub.port}
${optionalString (settings.tub.location != null)
"tub.location = ${settings.tub.location}"}
# This is a Twisted endpoint. Twisted Web doesn't work on
# non-TCP. ~ C.
web.port = tcp:${toString settings.web.port}

View file

@ -27,10 +27,17 @@ in
'';
};
user = mkOption {
type = types.str;
default = "nobody";
description =
"User to run u9fs under.";
};
extraArgs = mkOption {
type = types.str;
default = "";
example = "-a none -u nobody";
example = "-a none";
description =
''
Extra arguments to pass on invocation,
@ -38,13 +45,6 @@ in
'';
};
fsroot = mkOption {
type = types.path;
default = "/";
example = "/srv";
description = "File system root to serve to clients.";
};
};
};
@ -63,9 +63,10 @@ in
reloadIfChanged = true;
requires = [ "u9fs.socket" ];
serviceConfig =
{ ExecStart = "-${pkgs.u9fs}/bin/u9fs ${cfg.extraArgs} ${cfg.fsroot}";
{ ExecStart = "-${pkgs.u9fs}/bin/u9fs ${cfg.extraArgs}";
StandardInput = "socket";
StandardError = "journal";
User = cfg.user;
};
};
};

View file

@ -151,7 +151,7 @@ in
/etc/group r,
${config.environment.etc."nsswitch.conf".source} r,
${pkgs.glibc}/lib/*.so mr,
${pkgs.glibc.out}/lib/*.so mr,
${pkgs.tzdata}/share/zoneinfo/** r,
network inet stream,
@ -159,15 +159,15 @@ in
network inet dgram,
network inet6 dgram,
${pkgs.gcc.cc}/lib/libssp.so.* mr,
${pkgs.libsodium}/lib/libsodium.so.* mr,
${pkgs.gcc.cc.lib}/lib/libssp.so.* mr,
${pkgs.libsodium.out}/lib/libsodium.so.* mr,
${pkgs.systemd}/lib/libsystemd.so.* mr,
${pkgs.xz}/lib/liblzma.so.* mr,
${pkgs.libgcrypt}/lib/libgcrypt.so.* mr,
${pkgs.libgpgerror}/lib/libgpg-error.so.* mr,
${pkgs.libcap}/lib/libcap.so.* mr,
${pkgs.xz.out}/lib/liblzma.so.* mr,
${pkgs.libgcrypt.out}/lib/libgcrypt.so.* mr,
${pkgs.libgpgerror.out}/lib/libgpg-error.so.* mr,
${pkgs.libcap.out}/lib/libcap.so.* mr,
${pkgs.lz4}/lib/liblz4.so.* mr,
${pkgs.attr}/lib/libattr.so.* mr,
${pkgs.attr.out}/lib/libattr.so.* mr,
${resolverListFile} r,
}

View file

@ -8,7 +8,7 @@ let
homeDir = "/var/lib/i2pd";
extip = "EXTIP=\$(${pkgs.curl}/bin/curl -sf \"http://jsonip.com\" | ${pkgs.gawk}/bin/awk -F'\"' '{print $4}')";
extip = "EXTIP=\$(${pkgs.curl.bin}/bin/curl -sf \"http://jsonip.com\" | ${pkgs.gawk}/bin/awk -F'\"' '{print $4}')";
toYesNo = b: if b then "yes" else "no";

View file

@ -0,0 +1,136 @@
# NixOS module for iodine, ip over dns daemon
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.iodine;
iodinedUser = "iodined";
in
{
### configuration
options = {
services.iodine = {
clients = mkOption {
default = {};
description = ''
Each attribute of this option defines a systemd service that
runs iodine. Many or none may be defined.
The name of each service is
<literal>iodine-<replaceable>name</replaceable></literal>
where <replaceable>name</replaceable> is the name of the
corresponding attribute.
'';
example = literalExample ''
{
foo = {
server = "tunnel.mdomain.com";
relay = "8.8.8.8";
extraConfig = "-P mysecurepassword";
}
}
'';
type = types.attrsOf (types.submodule (
{
options = {
server = mkOption {
type = types.str;
default = "";
description = "Domain or Subdomain of server running iodined";
example = "tunnel.mydomain.com";
};
relay = mkOption {
type = types.str;
default = "";
description = "DNS server to use as a intermediate relay to the iodined server";
example = "8.8.8.8";
};
extraConfig = mkOption {
type = types.str;
default = "";
description = "Additional command line parameters";
example = "-P mysecurepassword -l 192.168.1.10 -p 23";
};
};
}));
};
server = {
enable = mkOption {
type = types.bool;
default = false;
description = "enable iodined server";
};
ip = mkOption {
type = types.str;
default = "";
description = "The assigned ip address or ip range";
example = "172.16.10.1/24";
};
domain = mkOption {
type = types.str;
default = "";
description = "Domain or subdomain of which nameservers point to us";
example = "tunnel.mydomain.com";
};
extraConfig = mkOption {
type = types.str;
default = "";
description = "Additional command line parameters";
example = "-P mysecurepassword -l 192.168.1.10 -p 23";
};
};
};
};
### implementation
config = mkIf (cfg.server.enable || cfg.clients != {}) {
environment.systemPackages = [ pkgs.iodine ];
boot.kernelModules = [ "tun" ];
systemd.services =
let
createIodineClientService = name: cfg:
{
description = "iodine client - ${name}";
wantedBy = [ "ip-up.target" ];
serviceConfig = {
RestartSec = "30s";
Restart = "always";
ExecStart = "${pkgs.iodine}/bin/iodine -f -u ${iodinedUser} ${cfg.extraConfig} ${cfg.relay} ${cfg.server}";
};
};
in
listToAttrs (
mapAttrsToList
(name: value: nameValuePair "iodine-${name}" (createIodineClientService name value))
cfg.clients
) // {
iodined = mkIf (cfg.server.enable) {
description = "iodine, ip over dns server daemon";
wantedBy = [ "ip-up.target" ];
serviceConfig.ExecStart = "${pkgs.iodine}/bin/iodined -f -u ${iodinedUser} ${cfg.server.extraConfig} ${cfg.server.ip} ${cfg.server.domain}";
};
};
users.extraUsers = singleton {
name = iodinedUser;
uid = config.ids.uids.iodined;
description = "Iodine daemon user";
};
users.extraGroups.iodined.gid = config.ids.gids.iodined;
};
}

View file

@ -1,86 +0,0 @@
# NixOS module for iodine, ip over dns daemon
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.iodined;
iodinedUser = "iodined";
in
{
### configuration
options = {
services.iodined = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable iodine, ip over dns daemon";
};
client = mkOption {
type = types.bool;
default = false;
description = "Start iodine in client mode";
};
ip = mkOption {
type = types.str;
default = "";
description = "Assigned ip address or ip range";
example = "172.16.10.1/24";
};
domain = mkOption {
type = types.str;
default = "";
description = "Domain or subdomain of which nameservers point to us";
example = "tunnel.mydomain.com";
};
extraConfig = mkOption {
type = types.str;
default = "";
description = "Additional command line parameters";
example = "-P mysecurepassword -l 192.168.1.10 -p 23";
};
};
};
### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.iodine ];
boot.kernelModules = [ "tun" ];
systemd.services.iodined = {
description = "iodine, ip over dns daemon";
wantedBy = [ "ip-up.target" ];
serviceConfig.ExecStart = "${pkgs.iodine}/sbin/iodined -f -u ${iodinedUser} ${cfg.extraConfig} ${cfg.ip} ${cfg.domain}";
};
users.extraUsers = singleton {
name = iodinedUser;
uid = config.ids.uids.iodined;
description = "Iodine daemon user";
};
users.extraGroups.iodined.gid = config.ids.gids.iodined;
assertions = [{ assertion = if !cfg.client then cfg.ip != "" else true;
message = "cannot start iodined without ip set";}
{ assertion = cfg.domain != "";
message = "cannot start iodined without domain name set";}];
};
}

View file

@ -58,9 +58,9 @@ in
services.minidlna.config =
''
port=${toString port}
friendly_name=NixOS Media Server
friendly_name=${config.networking.hostName} MiniDLNA
db_dir=/var/cache/minidlna
log_dir=/var/log/minidlna
log_level=warn
inotify=yes
${concatMapStrings (dir: ''
media_dir=${dir}
@ -83,21 +83,18 @@ in
preStart =
''
mkdir -p /var/cache/minidlna /var/log/minidlna /run/minidlna
chown minidlna /var/cache/minidlna /var/log/minidlna /run/minidlna
mkdir -p /var/cache/minidlna
chown -R minidlna:minidlna /var/cache/minidlna
'';
# FIXME: log through the journal rather than
# /var/log/minidlna. The -d flag does that, but also raises
# the log level to debug...
serviceConfig =
{ User = "minidlna";
Group = "nogroup";
Group = "minidlna";
PermissionsStartOnly = true;
Type = "forking";
RuntimeDirectory = "minidlna";
PIDFile = "/run/minidlna/pid";
ExecStart =
"@${pkgs.minidlna}/sbin/minidlnad minidlnad -P /run/minidlna/pid" +
"${pkgs.minidlna}/sbin/minidlnad -S -P /run/minidlna/pid" +
" -f ${pkgs.writeText "minidlna.conf" cfg.config}";
};
};

View file

@ -50,7 +50,7 @@ in
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = [ config.nix.package pkgs.bzip2 ];
path = [ config.nix.package pkgs.bzip2.bin ];
environment.NIX_REMOTE = "daemon";
environment.NIX_SECRET_KEY_FILE = cfg.secretKeyFile;

View file

@ -224,7 +224,7 @@ in
serviceConfig.ExecStart = "${nntp-proxy}/bin/nntp-proxy ${confFile}";
preStart = ''
if [ ! \( -f ${cfg.sslCert} -a -f ${cfg.sslKey} \) ]; then
${pkgs.openssl}/bin/openssl req -subj '/CN=AutoGeneratedCert/O=NixOS Service/C=US' \
${pkgs.openssl.bin}/bin/openssl req -subj '/CN=AutoGeneratedCert/O=NixOS Service/C=US' \
-new -newkey rsa:2048 -days 365 -nodes -x509 -keyout ${cfg.sslKey} -out ${cfg.sslCert};
fi
'';

View file

@ -6,6 +6,21 @@ let
cfg = config.services.shout;
shoutHome = "/var/lib/shout";
defaultConfig = pkgs.runCommand "config.js" {} ''
EDITOR=true ${pkgs.shout}/bin/shout config --home $PWD
mv config.js $out
'';
finalConfigFile = if (cfg.configFile != null) then cfg.configFile else ''
var _ = require('${pkgs.shout}/lib/node_modules/shout/node_modules/lodash')
module.exports = _.merge(
{},
require('${defaultConfig}'),
${builtins.toJSON cfg.config}
)
'';
in {
options.services.shout = {
enable = mkEnableOption "Shout web IRC client";
@ -35,8 +50,31 @@ in {
type = types.nullOr types.lines;
default = null;
description = ''
Contents of Shout's <filename>config.js</filename> file. If left empty,
Shout will generate from its defaults at first startup.
Contents of Shout's <filename>config.js</filename> file.
This is kept for backward compatibility; the recommended way is now to
use the <literal>config</literal> option.
Documentation: http://shout-irc.com/docs/server/configuration.html
'';
};
config = mkOption {
default = {};
type = types.attrs;
example = {
displayNetwork = false;
defaults = {
name = "Your Network";
host = "localhost";
port = 6697;
};
};
description = ''
Shout <filename>config.js</filename> contents as attribute set (will be
converted to JSON to generate the configuration file).
The options defined here will be merged into the default configuration file.
Documentation: http://shout-irc.com/docs/server/configuration.html
'';
@ -57,11 +95,7 @@ in {
wantedBy = [ "multi-user.target" ];
wants = [ "network-online.target" ];
after = [ "network-online.target" ];
preStart = if isNull cfg.configFile then ""
else ''
ln -sf ${pkgs.writeText "config.js" cfg.configFile} \
${shoutHome}/config.js
'';
preStart = "ln -sf ${pkgs.writeText "config.js" finalConfigFile} ${shoutHome}/config.js";
script = concatStringsSep " " [
"${pkgs.shout}/bin/shout"
(if cfg.private then "--private" else "--public")

View file

@ -7,6 +7,21 @@ let
cfg = config.services.syncthing;
defaultUser = "syncthing";
header = {
description = "Syncthing service";
environment = {
STNORESTART = "yes";
STNOUPGRADE = "yes";
inherit (cfg) all_proxy;
} // config.networking.proxy.envVars;
};
service = {
Restart = "on-failure";
SuccessExitStatus = "2 3 4";
RestartForceExitStatus="3 4";
};
in
{
@ -17,22 +32,33 @@ in
services.syncthing = {
enable = mkOption {
enable = mkEnableOption ''
Syncthing - the self-hosted open-source alternative
to Dropbox and Bittorrent Sync. Initial interface will be
available on http://127.0.0.1:8384/.
'';
systemService = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable the Syncthing, self-hosted open-source alternative
to Dropbox and BittorrentSync. Initial interface will be
available on http://127.0.0.1:8384/.
'';
default = true;
description = "Auto launch Syncthing as a system service.";
};
user = mkOption {
type = types.string;
default = defaultUser;
description = ''
Syncthing will be run under this user (user must exist,
this can be your user name).
Syncthing will be run under this user (the user will be created if it
doesn't exist; this can be your user name).
'';
};
group = mkOption {
type = types.string;
default = "nogroup";
description = ''
Syncthing will be run under this group (the group will not be created if
it doesn't exist; this can be your group name).
'';
};
@ -64,10 +90,7 @@ in
Syncthing package to use.
'';
};
};
};
@ -77,7 +100,7 @@ in
users = mkIf (cfg.user == defaultUser) {
extraUsers."${defaultUser}" =
{ group = defaultUser;
{ group = cfg.group;
home = cfg.dataDir;
createHome = true;
uid = config.ids.uids.syncthing;
@ -88,30 +111,27 @@ in
config.ids.gids.syncthing;
};
systemd.services.syncthing =
{
description = "Syncthing service";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
environment = {
STNORESTART = "yes"; # do not self-restart
STNOUPGRADE = "yes";
inherit (cfg) all_proxy;
} // config.networking.proxy.envVars;
serviceConfig = {
User = cfg.user;
Group = optionalString (cfg.user == defaultUser) defaultUser;
PermissionsStartOnly = true;
Restart = "on-failure";
ExecStart = "${pkgs.syncthing}/bin/syncthing -no-browser -home=${cfg.dataDir}";
SuccessExitStatus = "2 3 4";
RestartForceExitStatus="3 4";
};
};
environment.systemPackages = [ cfg.package ];
};
systemd.services = mkIf cfg.systemService {
syncthing = header // {
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = service // {
User = cfg.user;
Group = cfg.group;
PermissionsStartOnly = true;
ExecStart = "${pkgs.syncthing}/bin/syncthing -no-browser -home=${cfg.dataDir}";
};
};
};
systemd.user.services = {
syncthing = header // {
serviceConfig = service // {
ExecStart = "${pkgs.syncthing}/bin/syncthing -no-browser";
};
};
};
};
}

View file

@ -14,27 +14,27 @@ let
additionalBackends = pkgs.runCommand "additional-cups-backends" { }
''
mkdir -p $out
if [ ! -e ${cups}/lib/cups/backend/smb ]; then
if [ ! -e ${cups.out}/lib/cups/backend/smb ]; then
mkdir -p $out/lib/cups/backend
ln -sv ${pkgs.samba}/bin/smbspool $out/lib/cups/backend/smb
fi
# Provide support for printing via HTTPS.
if [ ! -e ${cups}/lib/cups/backend/https ]; then
if [ ! -e ${cups.out}/lib/cups/backend/https ]; then
mkdir -p $out/lib/cups/backend
ln -sv ${cups}/lib/cups/backend/ipp $out/lib/cups/backend/https
ln -sv ${cups.out}/lib/cups/backend/ipp $out/lib/cups/backend/https
fi
'';
# Here we can enable additional backends, filters, etc. that are not
# part of CUPS itself, e.g. the SMB backend is part of Samba. Since
# we can't update ${cups}/lib/cups itself, we create a symlink tree
# we can't update ${cups.out}/lib/cups itself, we create a symlink tree
# here and add the additional programs. The ServerBin directive in
# cupsd.conf tells cupsd to use this tree.
bindir = pkgs.buildEnv {
name = "cups-progs";
paths =
[ cups additionalBackends cups_filters pkgs.ghostscript ]
[ cups.out additionalBackends cups_filters pkgs.ghostscript ]
++ optional cfg.gutenprint gutenprint
++ cfg.drivers;
pathsToLink = [ "/lib/cups" "/share/cups" "/bin" ];
@ -267,24 +267,24 @@ in
description = "CUPS printing services";
};
environment.systemPackages = [ cups ] ++ optional polkitEnabled cups-pk-helper;
environment.systemPackages = [ cups.out ] ++ optional polkitEnabled cups-pk-helper;
environment.etc."cups".source = "/var/lib/cups";
services.dbus.packages = [ cups ] ++ optional polkitEnabled cups-pk-helper;
services.dbus.packages = [ cups.out ] ++ optional polkitEnabled cups-pk-helper;
# Cups uses libusb to talk to printers, and does not use the
# linux kernel driver. If the driver is not in a black list, it
# gets loaded, and then cups cannot access the printers.
boot.blacklistedKernelModules = [ "usblp" ];
systemd.packages = [ cups ];
systemd.packages = [ cups.out ];
systemd.services.cups =
{ wantedBy = [ "multi-user.target" ];
wants = [ "network.target" ];
after = [ "network.target" ];
path = [ cups ];
path = [ cups.out ];
preStart =
''

View file

@ -148,7 +148,7 @@ in {
if [ "$(id -u)" = 0 ]; then chown -R elasticsearch ${cfg.dataDir}; fi
'';
postStart = mkBefore ''
until ${pkgs.curl}/bin/curl -s -o /dev/null ${cfg.listenAddress}:${toString cfg.port}; do
until ${pkgs.curl.bin}/bin/curl -s -o /dev/null ${cfg.listenAddress}:${toString cfg.port}; do
sleep 1
done
'';

View file

@ -121,7 +121,7 @@ in
security.setuidOwners = singleton
{ program = "dbus-daemon-launch-helper";
source = "${pkgs.dbus_daemon}/libexec/dbus-daemon-launch-helper";
source = "${pkgs.dbus_daemon.lib}/libexec/dbus-daemon-launch-helper";
owner = "root";
group = "messagebus";
setuid = true;
@ -139,30 +139,6 @@ in
systemd.services.dbus.restartTriggers = [ configDir ];
systemd.user = {
services.dbus = {
description = "D-Bus User Message Bus";
requires = [ "dbus.socket" ];
# NixOS doesn't support "Also" so we pull it in manually
# As the .service is supposed to come up at the same time as
# the .socket, we use basic.target instead of default.target
wantedBy = [ "basic.target" ];
serviceConfig = {
ExecStart = "${pkgs.dbus_daemon}/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation";
ExecReload = "${pkgs.dbus_daemon}/bin/dbus-send --print-reply --session --type=method_call --dest=org.freedesktop.DBus / org.freedesktop.DBus.ReloadConfig";
};
};
sockets.dbus = {
description = "D-Bus User Message Bus Socket";
socketConfig = {
ListenStream = "%t/bus";
ExecStartPost = "-${config.systemd.package}/bin/systemctl --user set-environment DBUS_SESSION_BUS_ADDRESS=unix:path=%t/bus";
};
wantedBy = [ "sockets.target" ];
};
};
environment.pathsToLink = [ "/etc/dbus-1" "/share/dbus-1" ];
};

View file

@ -64,14 +64,14 @@ in
restartTriggers = [ config.environment.etc.hosts.source config.environment.etc."nsswitch.conf".source ];
serviceConfig =
{ ExecStart = "@${pkgs.glibc}/sbin/nscd nscd -f ${cfgFile}";
{ ExecStart = "@${pkgs.glibc.bin}/sbin/nscd nscd -f ${cfgFile}";
Type = "forking";
PIDFile = "/run/nscd/nscd.pid";
Restart = "always";
ExecReload =
[ "${pkgs.glibc}/sbin/nscd --invalidate passwd"
"${pkgs.glibc}/sbin/nscd --invalidate group"
"${pkgs.glibc}/sbin/nscd --invalidate hosts"
[ "${pkgs.glibc.bin}/sbin/nscd --invalidate passwd"
"${pkgs.glibc.bin}/sbin/nscd --invalidate group"
"${pkgs.glibc.bin}/sbin/nscd --invalidate hosts"
];
};
@ -79,7 +79,7 @@ in
# its pid. So wait until it's ready.
postStart =
''
while ! ${pkgs.glibc}/sbin/nscd -g -f ${cfgFile} > /dev/null; do
while ! ${pkgs.glibc.bin}/sbin/nscd -g -f ${cfgFile} > /dev/null; do
sleep 0.2
done
'';

View file

@ -0,0 +1,100 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.flexget;
pkg = pkgs.python27Packages.flexget;
ymlFile = pkgs.writeText "flexget.yml" ''
${cfg.config}
${optionalString cfg.systemScheduler "schedules: no"}
'';
configFile = "${toString cfg.homeDir}/flexget.yml";
in {
options = {
services.flexget = {
enable = mkEnableOption "Run FlexGet Daemon";
user = mkOption {
default = "deluge";
example = "some_user";
type = types.string;
description = "The user under which to run flexget.";
};
homeDir = mkOption {
default = "/var/lib/deluge";
example = "/home/flexget";
type = types.path;
description = "Where files live.";
};
interval = mkOption {
default = "10m";
example = "1h";
type = types.string;
description = "When to perform a <command>flexget</command> run. See <command>man 7 systemd.time</command> for the format.";
};
systemScheduler = mkOption {
default = true;
example = "false";
type = types.bool;
description = "When true, execute the runs via the flexget-runner.timer. If false, you have to specify the settings yourself in the YML file.";
};
config = mkOption {
default = "";
type = types.lines;
description = "The YAML configuration for FlexGet.";
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.python27Packages.flexget ];
systemd.services = {
flexget = {
description = "FlexGet Daemon";
path = [ pkgs.pythonPackages.flexget ];
serviceConfig = {
User = cfg.user;
Environment = "TZ=${config.time.timeZone}";
ExecStartPre = "${pkgs.coreutils}/bin/install -m644 ${ymlFile} ${configFile}";
ExecStart = "${pkg}/bin/flexget -c ${configFile} daemon start";
ExecStop = "${pkg}/bin/flexget -c ${configFile} daemon stop";
ExecReload = "${pkg}/bin/flexget -c ${configFile} daemon reload";
Restart = "on-failure";
PrivateTmp = true;
WorkingDirectory = toString cfg.homeDir;
};
wantedBy = [ "multi-user.target" ];
};
flexget-runner = mkIf cfg.systemScheduler {
description = "FlexGet Runner";
after = [ "flexget.service" ];
wants = [ "flexget.service" ];
serviceConfig = {
User = cfg.user;
ExecStart = "${pkg}/bin/flexget -c ${configFile} execute";
PrivateTmp = true;
WorkingDirectory = toString cfg.homeDir;
};
};
};
systemd.timers.flexget-runner = mkIf cfg.systemScheduler {
description = "Run FlexGet every ${cfg.interval}";
wantedBy = [ "timers.target" ];
timerConfig = {
OnBootSec = "5m";
OnUnitInactiveSec = cfg.interval;
Unit = "flexget-runner.service";
};
};
};
}

View file

@ -113,21 +113,21 @@ in
#include <abstractions/base>
#include <abstractions/nameservice>
${pkgs.glibc}/lib/*.so mr,
${pkgs.libevent}/lib/libevent*.so* mr,
${pkgs.curl}/lib/libcurl*.so* mr,
${pkgs.openssl}/lib/libssl*.so* mr,
${pkgs.openssl}/lib/libcrypto*.so* mr,
${pkgs.zlib}/lib/libz*.so* mr,
${pkgs.libssh2}/lib/libssh2*.so* mr,
${pkgs.glibc.out}/lib/*.so mr,
${pkgs.libevent.out}/lib/libevent*.so* mr,
${pkgs.curl.out}/lib/libcurl*.so* mr,
${pkgs.openssl.out}/lib/libssl*.so* mr,
${pkgs.openssl.out}/lib/libcrypto*.so* mr,
${pkgs.zlib.out}/lib/libz*.so* mr,
${pkgs.libssh2.out}/lib/libssh2*.so* mr,
${pkgs.systemd}/lib/libsystemd*.so* mr,
${pkgs.xz}/lib/liblzma*.so* mr,
${pkgs.libgcrypt}/lib/libgcrypt*.so* mr,
${pkgs.libgpgerror}/lib/libgpg-error*.so* mr,
${pkgs.libnghttp2}/lib/libnghttp2*.so* mr,
${pkgs.c-ares}/lib/libcares*.so* mr,
${pkgs.libcap}/lib/libcap*.so* mr,
${pkgs.attr}/lib/libattr*.so* mr,
${pkgs.xz.out}/lib/liblzma*.so* mr,
${pkgs.libgcrypt.out}/lib/libgcrypt*.so* mr,
${pkgs.libgpgerror.out}/lib/libgpg-error*.so* mr,
${pkgs.libnghttp2.out}/lib/libnghttp2*.so* mr,
${pkgs.c-ares.out}/lib/libcares*.so* mr,
${pkgs.libcap.out}/lib/libcap*.so* mr,
${pkgs.attr.out}/lib/libattr*.so* mr,
${pkgs.lz4}/lib/liblz4*.so* mr,
@{PROC}/sys/kernel/random/uuid r,

View file

@ -6,13 +6,13 @@ let
mainCfg = config.services.httpd;
httpd = mainCfg.package;
httpd = mainCfg.package.out;
version24 = !versionOlder httpd.version "2.4";
httpdConf = mainCfg.configFile;
php = pkgs.php.override { apacheHttpd = httpd; };
php = pkgs.php.override { apacheHttpd = httpd.dev; /* otherwise it only gets .out */ };
getPort = cfg: if cfg.port != 0 then cfg.port else if cfg.enableSSL then 443 else 80;

View file

@ -333,7 +333,7 @@ let
'version' => '${config.package.version}',
'openssl' => '${pkgs.openssl}/bin/openssl'
'openssl' => '${pkgs.openssl.bin}/bin/openssl'
);

View file

@ -39,7 +39,7 @@ in {
"${pkgs.diffutils}"
] ++
(if config.mercurial then ["${pkgs.mercurial}"] else []) ++
(if config.subversion then ["${pkgs.subversion}"] else []) ++
(if config.subversion then ["${pkgs.subversion.out}"] else []) ++
(if config.git then ["${pkgs.git}"] else []);
startupScript = pkgs.writeScript "activatePhabricator" ''

View file

@ -96,7 +96,7 @@ in
globalEnvVars = singleton
{ name = "PYTHONPATH";
value =
makeSearchPath "lib/${pkgs.python.libPrefix}/site-packages"
makeSearchPathOutputs "lib/${pkgs.python.libPrefix}/site-packages" ["lib"]
[ pkgs.mod_python
pkgs.pythonPackages.trac
pkgs.setuptools

View file

@ -0,0 +1,53 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.caddy;
configFile = pkgs.writeText "Caddyfile" cfg.config;
in
{
options.services.caddy = {
enable = mkEnableOption "Caddy web server";
config = mkOption {
description = "Verbatim Caddyfile to use";
};
email = mkOption {
default = "";
type = types.string;
description = "Email address (for Let's Encrypt certificate)";
};
dataDir = mkOption {
default = "/var/lib/caddy";
type = types.path;
description = "The data directory, for storing certificates.";
};
};
config = mkIf cfg.enable {
systemd.services.caddy = {
description = "Caddy web server";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${pkgs.caddy}/bin/caddy -conf=${configFile} -email=${cfg.email}";
Type = "simple";
User = "caddy";
Group = "caddy";
AmbientCapabilities = "cap_net_bind_service";
};
};
users.extraUsers.caddy = {
group = "caddy";
uid = config.ids.uids.caddy;
home = cfg.dataDir;
createHome = true;
};
users.extraGroups.caddy.gid = config.ids.uids.caddy;
};
}

View file

@ -7,7 +7,7 @@ let
e = pkgs.enlightenment;
xcfg = config.services.xserver;
cfg = xcfg.desktopManager.enlightenment;
GST_PLUGIN_PATH = lib.makeSearchPath "lib/gstreamer-1.0" [
GST_PLUGIN_PATH = lib.makeSearchPathOutputs "lib/gstreamer-1.0" ["lib"] [
pkgs.gst_all_1.gst-plugins-base
pkgs.gst_all_1.gst-plugins-good
pkgs.gst_all_1.gst-plugins-bad

View file

@ -166,7 +166,7 @@ in {
};
environment.variables.GIO_EXTRA_MODULES = [ "${gnome3.dconf}/lib/gio/modules"
"${gnome3.glib_networking}/lib/gio/modules"
"${gnome3.glib_networking.out}/lib/gio/modules"
"${gnome3.gvfs}/lib/gio/modules" ];
environment.systemPackages = gnome3.corePackages ++ cfg.sessionPath
++ (removePackagesByName gnome3.optionalPackages config.environment.gnome3.excludePackages);

View file

@ -62,13 +62,13 @@ in
${config.hardware.pulseaudio.package}/bin/pactl load-module module-device-manager "do_routing=1"
''}
exec ${kde5.plasma-workspace}/bin/startkde
exec startkde
'';
};
security.setuidOwners = singleton {
program = "kcheckpass";
source = "${kde5.plasma-workspace}/lib/libexec/kcheckpass";
source = "${kde5.plasma-workspace.out}/lib/libexec/kcheckpass";
owner = "root";
group = "root";
setuid = true;
@ -171,12 +171,12 @@ in
# Enable GTK applications to load SVG icons
environment.variables = mkIf (lib.hasAttr "breeze-icons" kde5) {
GDK_PIXBUF_MODULE_FILE = "${pkgs.librsvg}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache";
GDK_PIXBUF_MODULE_FILE = "${pkgs.librsvg.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache";
};
fonts.fonts = [ (kde5.oxygen-fonts or pkgs.noto-fonts) ];
programs.ssh.askPassword = "${kde5.ksshaskpass}/bin/ksshaskpass";
programs.ssh.askPassword = "${kde5.ksshaskpass.out}/bin/ksshaskpass";
# Enable helpful DBus services.
services.udisks2.enable = true;

View file

@ -45,7 +45,7 @@ let
${optionalString cfg.startDbusSession ''
if test -z "$DBUS_SESSION_BUS_ADDRESS"; then
exec ${pkgs.dbus.tools}/bin/dbus-launch --exit-with-session "$0" "$sessionType"
exec ${pkgs.dbus.dbus-launch} --exit-with-session "$0" "$sessionType"
fi
''}
@ -55,11 +55,11 @@ let
# Start PulseAudio if enabled.
${optionalString (config.hardware.pulseaudio.enable) ''
${optionalString (!config.hardware.pulseaudio.systemWide)
"${config.hardware.pulseaudio.package}/bin/pulseaudio --start"
"${config.hardware.pulseaudio.package.out}/bin/pulseaudio --start"
}
# Publish access credentials in the root window.
${config.hardware.pulseaudio.package}/bin/pactl load-module module-x11-publish "display=$DISPLAY"
${config.hardware.pulseaudio.package.out}/bin/pactl load-module module-x11-publish "display=$DISPLAY"
''}
# Tell systemd about our $DISPLAY. This is needed by the
@ -275,7 +275,7 @@ in
};
config = {
services.xserver.displayManager.xserverBin = "${xorg.xorgserver}/bin/X";
services.xserver.displayManager.xserverBin = "${xorg.xorgserver.out}/bin/X";
};
imports = [

View file

@ -24,9 +24,9 @@ let
# This wrapper ensures that we actually get themes
makeWrapper ${pkgs.lightdm_gtk_greeter}/sbin/lightdm-gtk-greeter \
$out/greeter \
--prefix PATH : "${pkgs.glibc}/bin" \
--set GDK_PIXBUF_MODULE_FILE "${pkgs.gdk_pixbuf}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache" \
--set GTK_PATH "${theme}:${pkgs.gtk3}" \
--prefix PATH : "${pkgs.glibc.bin}/bin" \
--set GDK_PIXBUF_MODULE_FILE "${pkgs.gdk_pixbuf.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache" \
--set GTK_PATH "${theme}:${pkgs.gtk3.out}" \
--set GTK_EXE_PREFIX "${theme}" \
--set GTK_DATA_PREFIX "${theme}" \
--set XDG_DATA_DIRS "${theme}/share:${icons}/share" \

View file

@ -48,7 +48,7 @@ let
[XDisplay]
MinimumVT=${toString xcfg.tty}
ServerPath=${xserverWrapper}
XephyrPath=${pkgs.xorg.xorgserver}/bin/Xephyr
XephyrPath=${pkgs.xorg.xorgserver.out}/bin/Xephyr
SessionCommand=${dmcfg.session.script}
SessionDir=${dmcfg.session.desktops}
XauthPath=${pkgs.xorg.xauth}/bin/xauth

View file

@ -41,7 +41,7 @@ with lib;
{ description = "Terminal Server";
path =
[ pkgs.xorgserver pkgs.gawk pkgs.which pkgs.openssl pkgs.xorg.xauth
[ pkgs.xorgserver.out pkgs.gawk pkgs.which pkgs.openssl pkgs.xorg.xauth
pkgs.nettools pkgs.shadow pkgs.procps pkgs.utillinux pkgs.bash
];

View file

@ -20,7 +20,7 @@ in
services.xserver.windowManager.session = singleton
{ name = "metacity";
start = ''
env LD_LIBRARY_PATH=${xorg.libX11}/lib:${xorg.libXext}/lib:/usr/lib/
env LD_LIBRARY_PATH=${xorg.libX11.out}/lib:${xorg.libXext.out}/lib:/usr/lib/
# !!! Hack: load the schemas for Metacity.
GCONF_CONFIG_SOURCE=xml::~/.gconf ${gnome.GConf}/bin/gconftool-2 \
--makefile-install-rule ${gnome.metacity}/etc/gconf/schemas/*.schemas # */

View file

@ -219,6 +219,12 @@ in
'';
};
dpi = mkOption {
type = types.nullOr types.int;
default = null;
description = "DPI resolution to use for X server.";
};
startDbusSession = mkOption {
type = types.bool;
default = true;
@ -450,7 +456,7 @@ in
]);
environment.systemPackages =
[ xorg.xorgserver
[ xorg.xorgserver.out
xorg.xrandr
xorg.xrdb
xorg.setxkbmap
@ -460,6 +466,7 @@ in
xorg.xsetroot
xorg.xinput
xorg.xprop
xorg.xauth
pkgs.xterm
pkgs.xdg_utils
]
@ -487,7 +494,7 @@ in
XKB_BINDIR = "${xorg.xkbcomp}/bin"; # Needed for the Xkb extension.
XORG_DRI_DRIVER_PATH = "/run/opengl-driver/lib/dri"; # !!! Depends on the driver selected at runtime.
LD_LIBRARY_PATH = concatStringsSep ":" (
[ "${xorg.libX11}/lib" "${xorg.libXext}/lib" ]
[ "${xorg.libX11.out}/lib" "${xorg.libXext.out}/lib" ]
++ concatLists (catAttrs "libPath" cfg.drivers));
} // cfg.displayManager.job.environment;
@ -507,18 +514,18 @@ in
};
services.xserver.displayManager.xserverArgs =
[ "-ac"
"-terminate"
[ "-terminate"
"-config ${configFile}"
"-xkbdir" "${cfg.xkbDir}"
] ++ optional (cfg.display != null) ":${toString cfg.display}"
++ optional (cfg.tty != null) "vt${toString cfg.tty}"
++ optional (cfg.dpi != null) "-dpi ${toString cfg.dpi}"
++ optionals (cfg.display != null) [ "-logfile" "/var/log/X.${toString cfg.display}.log" ]
++ optional (!cfg.enableTCP) "-nolisten tcp";
services.xserver.modules =
concatLists (catAttrs "modules" cfg.drivers) ++
[ xorg.xorgserver
[ xorg.xorgserver.out
xorg.xf86inputevdev
];

View file

@ -12,7 +12,8 @@ let
'';
});
path =
path = map # outputs TODO?
(pkg: (pkg.bin or (pkg.out or pkg)))
[ pkgs.coreutils pkgs.gnugrep pkgs.findutils
pkgs.glibc # needed for getent
pkgs.shadow

View file

@ -50,6 +50,11 @@ with lib;
(mkIf (!config.systemd.coredump.enable) {
boot.kernel.sysctl."kernel.core_pattern" = mkDefault "core";
systemd.extraConfig =
''
DefaultLimitCORE=0:infinity
'';
})
];

View file

@ -55,10 +55,10 @@ let
version extraConfig extraPerEntryConfig extraEntries
extraEntriesBeforeNixOS extraPrepareConfig configurationLimit copyKernels timeout
default fsIdentifier efiSupport gfxmodeEfi gfxmodeBios;
path = (makeSearchPath "bin" ([
path = (makeBinPath ([
pkgs.coreutils pkgs.gnused pkgs.gnugrep pkgs.findutils pkgs.diffutils pkgs.btrfs-progs
pkgs.utillinux ] ++ (if cfg.efiSupport && (cfg.version == 2) then [pkgs.efibootmgr ] else [])
)) + ":" + (makeSearchPath "sbin" [
)) + ":" + (makeSearchPathOutputs "sbin" ["bin"] [
pkgs.mdadm pkgs.utillinux
]);
});

View file

@ -436,9 +436,9 @@ in
${optionalString luks.yubikeySupport ''
copy_bin_and_libs ${pkgs.ykpers}/bin/ykchalresp
copy_bin_and_libs ${pkgs.ykpers}/bin/ykinfo
copy_bin_and_libs ${pkgs.openssl}/bin/openssl
copy_bin_and_libs ${pkgs.openssl.bin}/bin/openssl
cc -O3 -I${pkgs.openssl}/include -L${pkgs.openssl}/lib ${./pbkdf2-sha512.c} -o pbkdf2-sha512 -lcrypto
cc -O3 -I${pkgs.openssl}/include -L${pkgs.openssl.out}/lib ${./pbkdf2-sha512.c} -o pbkdf2-sha512 -lcrypto
strip -s pbkdf2-sha512
copy_bin_and_libs pbkdf2-sha512

View file

@ -31,7 +31,6 @@ let
extraUtils = pkgs.runCommand "extra-utils"
{ buildInputs = [pkgs.nukeReferences];
allowedReferences = [ "out" ]; # prevent accidents like glibc being included in the initrd
doublePatchelf = pkgs.stdenv.isArm;
}
''
set +o pipefail
@ -80,7 +79,7 @@ let
${config.boot.initrd.extraUtilsCommands}
# Copy ld manually since it isn't detected correctly
cp -pv ${pkgs.glibc}/lib/ld*.so.? $out/lib
cp -pv ${pkgs.glibc.out}/lib/ld*.so.? $out/lib
# Copy all of the needed libraries for the binaries
for BIN in $(find $out/{bin,sbin} -type f); do
@ -111,9 +110,6 @@ let
if ! test -L $i; then
echo "patching $i..."
patchelf --set-interpreter $out/lib/ld*.so.? --set-rpath $out/lib $i || true
if [ -n "$doublePatchelf" ]; then
patchelf --set-interpreter $out/lib/ld*.so.? --set-rpath $out/lib $i || true
fi
fi
done

View file

@ -7,11 +7,14 @@ let
kernel = config.boot.kernelPackages.kernel;
activateConfiguration = config.system.activationScripts.script;
readonlyMountpoint = pkgs.runCommand "readonly-mountpoint" {} ''
mkdir -p $out/bin
cc -O3 ${./readonly-mountpoint.c} -o $out/bin/readonly-mountpoint
strip -s $out/bin/readonly-mountpoint
'';
readonlyMountpoint = pkgs.stdenv.mkDerivation {
name = "readonly-mountpoint";
unpackPhase = "true";
installPhase = ''
mkdir -p $out/bin
cc -O3 ${./readonly-mountpoint.c} -o $out/bin/readonly-mountpoint
'';
};
bootStage2 = pkgs.substituteAll {
src = ./stage-2-init.sh;

View file

@ -193,7 +193,7 @@ in rec {
path = mkOption {
default = [];
apply = ps: "${makeSearchPath "bin" ps}:${makeSearchPath "sbin" ps}";
apply = ps: "${makeBinPath ps}:${makeSearchPathOutputs "sbin" ["bin"] ps}";
description = ''
Packages added to the service's <envar>PATH</envar>
environment variable. Both the <filename>bin</filename>

View file

@ -472,6 +472,13 @@ in
'';
};
systemd.generator-packages = mkOption {
default = [];
type = types.listOf types.package;
example = literalExample "[ pkgs.systemd-cryptsetup-generator ]";
description = "Packages providing systemd generators.";
};
systemd.defaultUnit = mkOption {
default = "multi-user.target";
type = types.str;
@ -628,7 +635,18 @@ in
environment.systemPackages = [ systemd ];
environment.etc = {
environment.etc = let
# generate contents for /etc/systemd/system-generators from
# systemd.generators and systemd.generator-packages
generators = pkgs.runCommand "system-generators" { packages = cfg.generator-packages; } ''
mkdir -p $out
for package in $packages
do
ln -s $package/lib/systemd/system-generators/* $out/
done;
${concatStrings (mapAttrsToList (generator: target: "ln -s ${target} $out/${generator};\n") cfg.generators)}
'';
in ({
"systemd/system".source = generateUnits "system" cfg.units upstreamSystemUnits upstreamSystemWants;
"systemd/user".source = generateUnits "user" cfg.user.units upstreamUserUnits [];
@ -667,7 +685,9 @@ in
${concatStringsSep "\n" cfg.tmpfiles.rules}
'';
} // mapAttrs' (n: v: nameValuePair "systemd/system-generators/${n}" {"source"=v;}) cfg.generators;
"systemd/system-generators" = { source = generators; };
});
system.activationScripts.systemd = stringAfter [ "groups" ]
''

Some files were not shown because too many files have changed in this diff.