Merge remote-tracking branch 'upstream/master' into feature/js-unknown-ghcjs

John Ericson 2019-09-02 01:31:31 -04:00
commit c33d80c071
12702 changed files with 456501 additions and 349462 deletions

.github/CODEOWNERS (vendored, 37 changes)

@@ -47,8 +47,8 @@
 /nixos/doc/manual/man-nixos-option.xml @nbp
 /nixos/modules/installer/tools/nixos-option.sh @nbp
-# NixOS modules
-/nixos/modules @Infinisil
+# New NixOS modules
+/nixos/modules/module-list.nix @Infinisil
 # Python-related code and docs
 /maintainers/scripts/update-python-libraries @FRidh
@@ -58,11 +58,11 @@
 /doc/languages-frameworks/python.section.md @FRidh
 # Haskell
-/pkgs/development/compilers/ghc @peti @ryantm @basvandijk
-/pkgs/development/haskell-modules @peti @ryantm @basvandijk
-/pkgs/development/haskell-modules/default.nix @peti @ryantm @basvandijk
-/pkgs/development/haskell-modules/generic-builder.nix @peti @ryantm @basvandijk
-/pkgs/development/haskell-modules/hoogle.nix @peti @ryantm @basvandijk
+/pkgs/development/compilers/ghc @basvandijk
+/pkgs/development/haskell-modules @basvandijk
+/pkgs/development/haskell-modules/default.nix @basvandijk
+/pkgs/development/haskell-modules/generic-builder.nix @basvandijk
+/pkgs/development/haskell-modules/hoogle.nix @basvandijk
 # Perl
 /pkgs/development/interpreters/perl @volth
@@ -107,8 +107,8 @@
 # Eclipse
 /pkgs/applications/editors/eclipse @rycee
-# https://github.com/NixOS/nixpkgs/issues/31401
-/lib/licenses.nix @ghost
+# Licenses
+/lib/licenses.nix @alyssais
 # Qt / KDE
 /pkgs/applications/kde @ttuegel
@@ -122,6 +122,14 @@
 /nixos/modules/services/databases/postgresql.nix @thoughtpolice
 /nixos/tests/postgresql.nix @thoughtpolice
+# Hardened profile & related modules
+/nixos/modules/profiles/hardened.nix @joachifm
+/nixos/modules/security/hidepid.nix @joachifm
+/nixos/modules/security/lock-kernel-modules.nix @joachifm
+/nixos/modules/security/misc.nix @joachifm
+/nixos/tests/hardened.nix @joachifm
+/pkgs/os-specific/linux/kernel/hardened-config.nix @joachifm
 # Dhall
 /pkgs/development/dhall-modules @Gabriel439 @Profpatsch
 /pkgs/development/interpreters/dhall @Gabriel439 @Profpatsch
@@ -131,3 +139,14 @@
 # Bazel
 /pkgs/development/tools/build-managers/bazel @mboes @Profpatsch
+# NixOS modules for e-mail and dns services
+/nixos/modules/services/mail/mailman.nix @peti
+/nixos/modules/services/mail/postfix.nix @peti
+/nixos/modules/services/networking/bind.nix @peti
+/nixos/modules/services/mail/rspamd.nix @peti
+# Emacs
+/pkgs/applications/editors/emacs-modes @adisbladis
+/pkgs/applications/editors/emacs @adisbladis
+/pkgs/top-level/emacs-packages.nix @adisbladis


@@ -8,5 +8,4 @@
 ## Technical details
-Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the
-results.
+Please run `nix run nixpkgs.nix-info -c nix-info -m` and paste the result.

.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file, 37 lines)

@@ -0,0 +1,37 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: '0.kind: bug'
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. ...
2. ...
3. ...
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
**Metadata**
Please run `nix run nixpkgs.nix-info -c nix-info -m` and paste the result.
Maintainer information:
```yaml
# a list of nixpkgs attributes affected by the problem
attribute:
# a list of nixos modules affected by the problem
module:
```


@@ -0,0 +1,18 @@
---
name: Packaging requests
about: For packages that are missing
title: ''
labels: '0.kind: packaging request'
assignees: ''
---
**Project description**
_describe the project a little_
**Metadata**
* homepage URL:
* source URL:
* license: mit, bsd, gpl2+, ...
* platforms: unix, linux, darwin, ...


@@ -1,3 +1,4 @@
+<!-- Nixpkgs has a lot of new incoming Pull Requests, but not enough people to review this constant stream. Even if you aren't a committer, we would appreciate reviews of other PRs, especially simple ones like package updates. Just testing the relevant package/service and leaving a comment saying what you tested, how you tested it and whether it worked would be great. List of open PRs: <https://github.com/NixOS/nixpkgs/pulls>, for more about reviewing contributions: <https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download/1/nixpkgs/manual.html#sec-reviewing-contributions>. Reviewing isn't mandatory, but it would help out a lot and reduce the average time-to-merge for all of us. Thanks a lot if you do! -->
 ###### Motivation for this change
@@ -11,11 +12,12 @@
 - [ ] macOS
 - [ ] other Linux distributions
 - [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
-- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
+- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nix-review --run "nix-review wip"`
 - [ ] Tested execution of all binary files (usually in `./result/bin/`)
 - [ ] Determined the impact on package closure size (by running `nix path-info -S` before and after)
-- [ ] Assured whether relevant documentation is up to date
+- [ ] Ensured that relevant documentation is up to date
 - [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).
----
+###### Notify maintainers
+cc @


@@ -1 +1 @@
-19.03
+19.09


@@ -1,4 +1,4 @@
-Copyright (c) 2003-2018 Eelco Dolstra and the Nixpkgs/NixOS contributors
+Copyright (c) 2003-2019 Eelco Dolstra and the Nixpkgs/NixOS contributors
 Permission is hereby granted, free of charge, to any person obtaining
 a copy of this software and associated documentation files (the


@@ -1,6 +1,7 @@
 [<img src="https://nixos.org/logo/nixos-hires.png" width="500px" alt="logo" />](https://nixos.org/nixos)
 [![Code Triagers Badge](https://www.codetriage.com/nixos/nixpkgs/badges/users.svg)](https://www.codetriage.com/nixos/nixpkgs)
+[![Open Collective supporters](https://opencollective.com/nixos/tiers/supporter/badge.svg?label=Supporter&color=brightgreen)](https://opencollective.com/nixos)
 Nixpkgs is a collection of packages for the [Nix](https://nixos.org/nix/) package
 manager. It is periodically built and tested by the [Hydra](https://hydra.nixos.org/)
@@ -12,12 +13,12 @@ build daemon as so-called channels. To get channel information via git, add
 ```
 For stability and maximum binary package support, it is recommended to maintain
-custom changes on top of one of the channels, e.g. `nixos-18.09` for the latest
+custom changes on top of one of the channels, e.g. `nixos-19.03` for the latest
 release and `nixos-unstable` for the latest successful build of master:
 ```
 % git remote update channels
-% git rebase channels/nixos-18.09
+% git rebase channels/nixos-19.03
 ```
 For pull requests, please rebase onto nixpkgs `master`.
@@ -31,9 +32,9 @@ For pull requests, please rebase onto nixpkgs `master`.
 * [Manual (NixOS)](https://nixos.org/nixos/manual/)
 * [Community maintained wiki](https://nixos.wiki/)
 * [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
-* [Continuous package builds for 18.09 release](https://hydra.nixos.org/jobset/nixos/release-18.09)
+* [Continuous package builds for 19.03 release](https://hydra.nixos.org/jobset/nixos/release-19.03)
 * [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
-* [Tests for 18.09 release](https://hydra.nixos.org/job/nixos/release-18.09/tested#tabs-constituents)
+* [Tests for 19.03 release](https://hydra.nixos.org/job/nixos/release-19.03/tested#tabs-constituents)
 Communication:

doc/.gitignore (vendored, 7 changes)

@@ -1,7 +1,8 @@
 *.chapter.xml
 *.section.xml
 .version
-out
-manual-full.xml
-highlightjs
+functions/library/generated
 functions/library/locations.xml
+highlightjs
+manual-full.xml
+out


@@ -8,10 +8,10 @@ debug:
 	nix-shell --run "xmloscopy --docbook5 ./manual.xml ./manual-full.xml"
 .PHONY: format
-format:
+format: doc-support/result
 	find . -iname '*.xml' -type f | while read f; do \
 	  echo $$f ;\
-	  xmlformat --config-file "$$XMLFORMAT_CONFIG" -i $$f ;\
+	  xmlformat --config-file "doc-support/result/xmlformat.conf" -i $$f ;\
 	done
 .PHONY: fix-misc-xml
@@ -21,19 +21,19 @@ fix-misc-xml:
 .PHONY: clean
 clean:
-	rm -f ${MD_TARGETS} .version manual-full.xml functions/library/locations.xml functions/library/generated
+	rm -f ${MD_TARGETS} doc-support/result .version manual-full.xml functions/library/locations.xml functions/library/generated
 	rm -rf ./out/ ./highlightjs
 .PHONY: validate
-validate: manual-full.xml
-	jing "$$RNG" manual-full.xml
+validate: manual-full.xml doc-support/result
+	jing doc-support/result/docbook.rng manual-full.xml
-out/html/index.html: manual-full.xml style.css highlightjs
+out/html/index.html: doc-support/result manual-full.xml style.css highlightjs
 	mkdir -p out/html
-	xsltproc ${xsltFlags} \
+	xsltproc \
 	  --nonet --xinclude \
 	  --output $@ \
-	  "$$XSL/docbook/xhtml/docbook.xsl" \
+	  doc-support/result/xhtml.xsl \
 	  ./manual-full.xml
 	mkdir -p out/html/highlightjs/
@@ -43,50 +43,48 @@ out/html/index.html: manual-full.xml style.css highlightjs
 	cp ./style.css out/html/style.css
 	mkdir -p out/html/images/callouts
-	cp "$$XSL/docbook/images/callouts/"*.svg out/html/images/callouts/
+	cp doc-support/result/xsl/docbook/images/callouts/*.svg out/html/images/callouts/
 	chmod u+w -R out/html/
 out/epub/manual.epub: manual-full.xml
 	mkdir -p out/epub/scratch
-	xsltproc ${xsltFlags} --nonet \
+	xsltproc --nonet \
 	  --output out/epub/scratch/ \
-	  "$$XSL/docbook/epub/docbook.xsl" \
+	  doc-support/result/epub.xsl \
 	  ./manual-full.xml
 	cp ./overrides.css out/epub/scratch/OEBPS
 	cp ./style.css out/epub/scratch/OEBPS
 	mkdir -p out/epub/scratch/OEBPS/images/callouts/
-	cp "$$XSL/docbook/images/callouts/"*.svg out/epub/scratch/OEBPS/images/callouts/
+	cp doc-support/result/xsl/docbook/images/callouts/*.svg out/epub/scratch/OEBPS/images/callouts/
 	echo "application/epub+zip" > mimetype
 	zip -0Xq "out/epub/manual.epub" mimetype
 	rm mimetype
 	cd "out/epub/scratch/" && zip -Xr9D "../manual.epub" *
 	rm -rf "out/epub/scratch/"
-highlightjs:
+highlightjs: doc-support/result
 	mkdir -p highlightjs
-	cp -r "$$HIGHLIGHTJS/highlight.pack.js" highlightjs/
-	cp -r "$$HIGHLIGHTJS/LICENSE" highlightjs/
-	cp -r "$$HIGHLIGHTJS/mono-blue.css" highlightjs/
-	cp -r "$$HIGHLIGHTJS/loader.js" highlightjs/
+	cp -r doc-support/result/highlightjs/highlight.pack.js highlightjs/
+	cp -r doc-support/result/highlightjs/LICENSE highlightjs/
+	cp -r doc-support/result/highlightjs/mono-blue.css highlightjs/
+	cp -r doc-support/result/highlightjs/loader.js highlightjs/
 manual-full.xml: ${MD_TARGETS} .version functions/library/locations.xml functions/library/generated *.xml **/*.xml **/**/*.xml
 	xmllint --nonet --xinclude --noxincludenode manual.xml --output manual-full.xml
-.version:
-	nix-instantiate --eval \
-	  -E '(import ../lib).version' > .version
+.version: doc-support/result
+	ln -rfs ./doc-support/result/version .version
-function_locations := $(shell nix-build --no-out-link ./lib-function-locations.nix)
+doc-support/result: doc-support/default.nix
+	(cd doc-support; nix-build)
-functions/library/locations.xml:
-	ln -s $(function_locations) ./functions/library/locations.xml
+functions/library/locations.xml: doc-support/result
+	ln -rfs ./doc-support/result/function-locations.xml functions/library/locations.xml
-functions/library/generated:
-	nix-build ./lib-function-docs.nix \
-	  --arg locationsXml $(function_locations)\
-	  --out-link ./functions/library/generated
+functions/library/generated: doc-support/result
+	ln -rfs ./doc-support/result/function-docs functions/library/generated
 %.section.xml: %.section.md
 	pandoc $^ -w docbook+smart \


@@ -197,20 +197,14 @@ args.stdenv.mkDerivation (args // {
  <title>Package naming</title>
  <para>
   The key words <emphasis>must</emphasis>, <emphasis>must not</emphasis>,
   <emphasis>required</emphasis>, <emphasis>shall</emphasis>, <emphasis>shall
   not</emphasis>, <emphasis>should</emphasis>, <emphasis>should
   not</emphasis>, <emphasis>recommended</emphasis>, <emphasis>may</emphasis>,
   and <emphasis>optional</emphasis> in this section are to be interpreted as
   described in <link xlink:href="https://tools.ietf.org/html/rfc2119">RFC
   2119</link>. Only <emphasis>emphasized</emphasis> words are to be
   interpreted in this way.
  </para>
  <para>
@ -253,15 +247,15 @@ args.stdenv.mkDerivation (args // {
<itemizedlist> <itemizedlist>
<listitem> <listitem>
<para> <para>
The <literal>name</literal> attribute <emphasis>should</emphasis> The <literal>name</literal> attribute <emphasis>should</emphasis> be
be identical to the upstream package name. identical to the upstream package name.
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
The <literal>name</literal> attribute <emphasis>must not</emphasis> The <literal>name</literal> attribute <emphasis>must not</emphasis>
contain uppercase letters — e.g., <literal>"mplayer-1.0rc2"</literal> contain uppercase letters — e.g., <literal>"mplayer-1.0rc2"</literal>
instead of <literal>"MPlayer-1.0rc2"</literal>. instead of <literal>"MPlayer-1.0rc2"</literal>.
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
@ -275,28 +269,29 @@ args.stdenv.mkDerivation (args // {
<para> <para>
If a package is not a release but a commit from a repository, then the If a package is not a release but a commit from a repository, then the
version part of the name <emphasis>must</emphasis> be the date of that version part of the name <emphasis>must</emphasis> be the date of that
(fetched) commit. The date <emphasis>must</emphasis> be in <literal>"YYYY-MM-DD"</literal> (fetched) commit. The date <emphasis>must</emphasis> be in
format. Also append <literal>"unstable"</literal> to the name - e.g., <literal>"YYYY-MM-DD"</literal> format. Also append
<literal>"unstable"</literal> to the name - e.g.,
<literal>"pkgname-unstable-2014-09-23"</literal>. <literal>"pkgname-unstable-2014-09-23"</literal>.
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
Dashes in the package name <emphasis>should</emphasis> be preserved in new variable names, Dashes in the package name <emphasis>should</emphasis> be preserved in
rather than converted to underscores or camel cased — e.g., new variable names, rather than converted to underscores or camel cased
<varname>http-parser</varname> instead of <varname>http_parser</varname> — e.g., <varname>http-parser</varname> instead of
or <varname>httpParser</varname>. The hyphenated style is preferred in <varname>http_parser</varname> or <varname>httpParser</varname>. The
all three package names. hyphenated style is preferred in all three package names.
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
If there are multiple versions of a package, this <emphasis>should</emphasis> be reflected in If there are multiple versions of a package, this
the variable names in <filename>all-packages.nix</filename>, e.g. <emphasis>should</emphasis> be reflected in the variable names in
<varname>json-c-0-9</varname> and <varname>json-c-0-11</varname>. If <filename>all-packages.nix</filename>, e.g. <varname>json-c-0-9</varname>
there is an obvious “default” version, make an attribute like and <varname>json-c-0-11</varname>. If there is an obvious “default”
<literal>json-c = json-c-0-9;</literal>. See also version, make an attribute like <literal>json-c = json-c-0-9;</literal>.
<xref linkend="sec-versioning" /> See also <xref linkend="sec-versioning" />
</para> </para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
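The conventions above can be summarized in a small, purely illustrative Nix sketch; every attribute, path, and owner name here is hypothetical and only the naming patterns themselves come from the text:

```nix
# Hypothetical excerpt in the style of pkgs/top-level/all-packages.nix.
{ callPackage, stdenv, fetchFromGitHub, lib }:

rec {
  # Dashes from the upstream name are kept in the attribute name.
  http-parser = callPackage ./http-parser { };          # hypothetical path

  # Multiple versions get their own attributes, plus a "default" alias.
  json-c-0-9  = callPackage ./json-c/0.9.nix { };       # hypothetical path
  json-c-0-11 = callPackage ./json-c/0.11.nix { };      # hypothetical path
  json-c      = json-c-0-9;

  # A package built from an unreleased commit: lowercase name, the fetched
  # commit's date as the version, and "unstable" appended to the name.
  pkgname-unstable = stdenv.mkDerivation {
    name = "pkgname-unstable-2014-09-23";
    src = fetchFromGitHub {
      owner = "example";                                # hypothetical
      repo = "pkgname";                                 # hypothetical
      rev = "0000000000000000000000000000000000000000"; # placeholder
      sha256 = lib.fakeSha256;                          # placeholder
    };
  };
}
```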
@@ -814,8 +809,8 @@ args.stdenv.mkDerivation (args // {
  <para>
   There are multiple ways to fetch a package source in nixpkgs. The general
   guideline is that you should package reproducible sources with a high degree
   of availability. Right now there is only one fetcher which has mirroring
   support and that is <literal>fetchurl</literal>. Note that you should also
   prefer protocols which have a corresponding proxy environment variable.
  </para>
@@ -869,8 +864,10 @@ src = fetchFromGitHub {
 }
 </programlisting>
    Find the value to put as <literal>sha256</literal> by running
    <literal>nix run -f '&lt;nixpkgs&gt;' nix-prefetch-github -c
    nix-prefetch-github --rev 1f795f9f44607cc5bec70d1300150bfefcef2aae NixOS
    nix</literal> or <literal>nix-prefetch-url --unpack
    https://github.com/NixOS/nix/archive/1f795f9f44607cc5bec70d1300150bfefcef2aae.tar.gz</literal>.
   </para>
  </listitem>
 </itemizedlist>
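As a sketch of that workflow, one can start from the rev given in the example above with a deliberately fake hash and copy the real sha256 out of the nix-prefetch-github output or the hash-mismatch error from the first build attempt:

```nix
{ lib, fetchFromGitHub }:

fetchFromGitHub {
  owner = "NixOS";
  repo = "nix";
  rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
  # Placeholder hash: the first build fails with the expected sha256,
  # which can then be pasted in here.
  sha256 = lib.fakeSha256;
}
```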
@@ -924,7 +921,7 @@ src = fetchFromGitHub {
   <para>
    You can convert between formats with nix-hash, for example:
<screen>
<prompt>$ </prompt>nix-hash --type sha256 --to-base32 <replaceable>HASH</replaceable>
</screen>
   </para>
  </listitem>
@@ -953,17 +950,23 @@ $ nix-hash --type sha256 --to-base32 <replaceable>HASH</replaceable>
    would be replace hash with a fake one and rebuild. Nix build will fail and
    error message will contain desired hash.
   </para>
   <warning>
    <para>
     This method has security problems. Check below for details.
    </para>
   </warning>
  </listitem>
 </orderedlist>
 <section xml:id="sec-source-hashes-security">
  <title>Obtaining hashes securely</title>
  <para>
   Let's say Man-in-the-Middle (MITM) sits close to your network. Then instead
   of fetching source you can fetch malware, and instead of source hash you
   get hash of malware. Here are security considerations for this scenario:
  </para>
  <itemizedlist>
   <listitem>
    <para>
@@ -972,7 +975,8 @@ $ nix-hash --type sha256 --to-base32 <replaceable>HASH</replaceable>
   </listitem>
   <listitem>
    <para>
     hashes from upstream (in method 3) should be obtained via secure
     protocol;
    </para>
   </listitem>
   <listitem>
@@ -982,12 +986,12 @@ $ nix-hash --type sha256 --to-base32 <replaceable>HASH</replaceable>
   </listitem>
   <listitem>
    <para>
     <literal>https://</literal> URLs are not secure in method 5. When
     obtaining hashes with fake hash method, TLS checks are disabled. So
     refetch source hash from several different networks to exclude MITM
     scenario. Alternatively, use fake hash method to make Nix error, but
     instead of extracting hash from error, extract
     <literal>https://</literal> URL and prefetch it with method 1.
    </para>
   </listitem>
  </itemizedlist>
@@ -1034,7 +1038,7 @@ patches = [ ./0001-changes.patch ];
   <para>
    Move to the root directory of the source code you're patching.
<screen>
<prompt>$ </prompt>cd the/program/source</screen>
   </para>
  </listitem>
  <listitem>
@@ -1042,8 +1046,8 @@ $ cd the/program/source</screen>
    If a git repository is not already present, create one and stage all of
    the source files.
<screen>
<prompt>$ </prompt>git init
<prompt>$ </prompt>git add .</screen>
   </para>
  </listitem>
  <listitem>
@@ -1056,7 +1060,7 @@ $ git add .</screen>
   <para>
    Use git to create a diff, and pipe the output to a patch file:
<screen>
<prompt>$ </prompt>git diff > nixpkgs/pkgs/the/package/0001-changes.patch</screen>
   </para>
  </listitem>
 </orderedlist>


@@ -132,13 +132,13 @@
  </itemizedlist>
  <para>
   The difference between a package being unsupported on some system and being
   broken is admittedly a bit fuzzy. If a program <emphasis>ought</emphasis> to
   work on a certain platform, but doesn't, the platform should be included in
   <literal>meta.platforms</literal>, but marked as broken with e.g.
   <literal>meta.broken = !hostPlatform.isWindows</literal>. Of course, this
   begs the question of what "ought" means exactly. That is left to the package
   maintainer.
  </para>
 </section>
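A minimal sketch of such a meta block; the package and the choice of broken platform are illustrative only, showing a package that ought to work on all Unix platforms but is currently known not to build on Darwin hosts:

```nix
{ stdenv, lib }:

stdenv.mkDerivation {
  name = "example-1.0";                 # hypothetical package
  src = ./.;                            # hypothetical source
  meta = {
    # The package ought to work on every Unix platform...
    platforms = lib.platforms.unix;
    # ...but is kept in meta.platforms and flagged as broken
    # on the hosts where it is known not to build.
    broken = stdenv.hostPlatform.isDarwin;
  };
}
```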
<section xml:id="sec-allow-unfree"> <section xml:id="sec-allow-unfree">
@ -175,9 +175,8 @@
</programlisting> </programlisting>
</para> </para>
<para> <para>
For a more useful example, try the following. This configuration For a more useful example, try the following. This configuration only
only allows unfree packages named flash player and visual studio allows unfree packages named flash player and visual studio code:
code:
<programlisting> <programlisting>
{ {
allowUnfreePredicate = (pkg: builtins.elem allowUnfreePredicate = (pkg: builtins.elem
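The hunk cuts the listing off at this point. For orientation only, a predicate of this shape might be completed roughly as follows; the package names are assumptions, and builtins.parseDrvName is used to strip the version from the derivation name:

```nix
# ~/.config/nixpkgs/config.nix (sketch)
{
  allowUnfreePredicate = pkg: builtins.elem
    (builtins.parseDrvName pkg.name).name [
      "flashplayer"   # assumed package name
      "vscode"        # assumed package name
    ];
}
```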


@@ -12,9 +12,9 @@ xlink:href="https://github.com/NixOS/nixpkgs/tree/master/doc">doc</filename>
   You can quickly check your edits with <command>make</command>:
  </para>
<screen>
<prompt>$ </prompt>cd /path/to/nixpkgs/doc
<prompt>$ </prompt>nix-shell
<prompt>[nix-shell]$ </prompt>make
</screen>
  <para>
   If you experience problems, run <command>make debug</command> to help
@@ -24,10 +24,10 @@ xlink:href="https://github.com/NixOS/nixpkgs/tree/master/doc">doc</filename>
   After making modifications to the manual, it's important to build it before
   committing. You can do that as follows:
<screen>
<prompt>$ </prompt>cd /path/to/nixpkgs/doc
<prompt>$ </prompt>nix-shell
<prompt>[nix-shell]$ </prompt>make clean
<prompt>[nix-shell]$ </prompt>nix-build .
</screen>
   If the build succeeds, the manual will be in
   <filename>./result/share/doc/nixpkgs/manual.html</filename>.


@@ -6,17 +6,18 @@
  <title>Introduction</title>
  <para>
   "Cross-compilation" means compiling a program on one machine for another
   type of machine. For example, a typical use of cross-compilation is to
   compile programs for embedded devices. These devices often don't have the
   computing power and memory to compile their own programs. One might think
   that cross-compilation is a fairly niche concern. However, there are
   significant advantages to rigorously distinguishing between build-time and
   run-time environments! Significant, because the benefits apply even when one
   is developing and deploying on the same machine. Nixpkgs is increasingly
   adopting the opinion that packages should be written with cross-compilation
   in mind, and nixpkgs should evaluate in a similar way (by minimizing
   cross-compilation-specific special cases) whether or not one is
   cross-compiling.
  </para>
  <para>
@@ -30,19 +31,20 @@
 <section xml:id="sec-cross-packaging">
  <title>Packaging in a cross-friendly manner</title>
  <section xml:id="ssec-cross-platform-parameters">
   <title>Platform parameters</title>
   <para>
    Nixpkgs follows the
    <link
    xlink:href="https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html">conventions
    of GNU autoconf</link>. We distinguish between 3 types of platforms when
    building a derivation: <wordasword>build</wordasword>,
    <wordasword>host</wordasword>, and <wordasword>target</wordasword>. In
    summary, <wordasword>build</wordasword> is the platform on which a package
    is being built, <wordasword>host</wordasword> is the platform on which it
    will run. The third attribute, <wordasword>target</wordasword>, is relevant
    only for certain specific compilers and build tools.
   </para>
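A small sketch that makes the three platforms visible for whatever build you run it in; the derivation name is made up, and `config` is just one of several attributes each platform set carries:

```nix
{ stdenv }:

# Hypothetical demo derivation: writes the three platform strings to $out.
stdenv.mkDerivation {
  name = "platform-report";
  buildCommand = ''
    mkdir -p $out
    echo "build:  ${stdenv.buildPlatform.config}"  >> $out/platforms
    echo "host:   ${stdenv.hostPlatform.config}"   >> $out/platforms
    echo "target: ${stdenv.targetPlatform.config}" >> $out/platforms
  '';
}
```

Built natively, all three lines normally agree; built with a crossSystem they diverge.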
   <para>
@@ -95,10 +97,10 @@
     The build process of certain compilers is written in such a way that the
     compiler resulting from a single build can itself only produce binaries
     for a single platform. The task of specifying this single "target
     platform" is thus pushed to build time of the compiler. The root cause
     of this is that the compiler (which will be run on the host) and the
     standard library/runtime (which will be run on the target) are built by
     a single build process.
    </para>
    <para>
     There is no fundamental need to think about a single target ahead of
@@ -136,9 +138,9 @@
     This is a two-component shorthand for the platform. Examples of this
     would be "x86_64-darwin" and "i686-linux"; see
     <literal>lib.systems.doubles</literal> for more. The first component
     corresponds to the CPU architecture of the platform and the second to
     the operating system of the platform (<literal>[cpu]-[os]</literal>).
     This format has built-in support in Nix, such as the
     <varname>builtins.currentSystem</varname> impure string.
    </para>
   </listitem>
@@ -149,14 +151,14 @@
    </term>
    <listitem>
     <para>
      This is a 3- or 4- component shorthand for the platform. Examples of
      this would be <literal>x86_64-unknown-linux-gnu</literal> and
      <literal>aarch64-apple-darwin14</literal>. This is a standard format
      called the "LLVM target triple", as they are pioneered by LLVM. In the
      4-part form, this corresponds to
      <literal>[cpu]-[vendor]-[os]-[abi]</literal>. This format is strictly
      more informative than the "Nix host double", as the previous format
      could analogously be termed. This needs a better name than
      <varname>config</varname>!
     </para>
    </listitem>
@@ -167,11 +169,10 @@
    </term>
    <listitem>
     <para>
      This is a Nix representation of a parsed LLVM target triple with
      white-listed components. This can be specified directly, or actually
      parsed from the <varname>config</varname>. See
      <literal>lib.systems.parse</literal> for the exact representation.
     </para>
    </listitem>
   </varlistentry>
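For instance, the parsed representation can be inspected directly; mkSystemFromString is the entry point in lib.systems.parse, and the values shown in comments are what it produces for this example triple:

```nix
# Evaluate with: nix-instantiate --eval --strict parsed-example.nix
let
  lib = (import <nixpkgs> { }).lib;
  parsed = lib.systems.parse.mkSystemFromString "aarch64-unknown-linux-gnu";
in {
  cpu    = parsed.cpu.name;      # "aarch64"
  vendor = parsed.vendor.name;   # "unknown"
  kernel = parsed.kernel.name;   # "linux"
  abi    = parsed.abi.name;      # "gnu"
}
```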
@@ -218,8 +219,20 @@
  </variablelist>
 </section>
 <section xml:id="ssec-cross-dependency-categorization">
  <title>Theory of dependency categorization</title>
  <note>
   <para>
    This is a rather philosophical description that isn't very
    Nixpkgs-specific. For an overview of all the relevant attributes given to
    <varname>mkDerivation</varname>, see
    <xref
    linkend="ssec-stdenv-dependencies"/>. For a description of how
    everything is implemented, see
    <xref linkend="ssec-cross-dependency-implementation" />.
   </para>
  </note>
  <para>
   In this section we explore the relationship between both runtime and
@@ -227,83 +240,98 @@
  </para>
  <para>
   A run time dependency between two packages requires that their host
   platforms match. This is directly implied by the meaning of "host platform"
   and "runtime dependency": The package dependency exists while both packages
   are running on a single host platform.
  </para>
  <para>
   A build time dependency, however, has a shift in platforms between the
   depending package and the depended-on package. "build time dependency"
   means that to build the depending package we need to be able to run the
   depended-on's package. The depending package's build platform is therefore
   equal to the depended-on package's host platform.
  </para>
  <para>
   If both the dependency and depending packages aren't compilers or other
   machine-code-producing tools, we're done. And indeed
   <varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>
   have covered these simpler build-time and run-time (respectively) changes
   for many years. But if the dependency does produce machine code, we might
   need to worry about its target platform too. In principle, that target
   platform might be any of the depending package's build, host, or target
   platforms, but we prohibit dependencies from a "later" platform to an
   earlier platform to limit confusion because we've never seen a legitimate
   use for them.
  </para>
  <para>
   Finally, if the depending package is a compiler or other
   machine-code-producing tool, it might need dependencies that run at "emit
   time". This is for compilers that (regrettably) insist on being built
   together with their source languages' standard libraries. Assuming build !=
   host != target, a run-time dependency of the standard library cannot be run
   at the compiler's build time or run time, but only at the run time of code
   emitted by the compiler.
  </para>
  <para>
   Putting this all together, that means we have dependencies in the form
   "host → target", in at most the following six combinations:
   <table>
    <caption>Possible dependency types</caption>
    <thead>
     <tr>
      <th>Dependency's host platform</th>
      <th>Dependency's target platform</th>
     </tr>
    </thead>
    <tbody>
     <tr>
      <td>build</td>
      <td>build</td>
     </tr>
     <tr>
      <td>build</td>
      <td>host</td>
     </tr>
     <tr>
      <td>build</td>
      <td>target</td>
     </tr>
     <tr>
      <td>host</td>
      <td>host</td>
     </tr>
     <tr>
      <td>host</td>
      <td>target</td>
     </tr>
     <tr>
      <td>target</td>
      <td>target</td>
     </tr>
    </tbody>
   </table>
  </para>
  <para>
   Some examples will make this table clearer. Suppose there's some package
   that is being built with a <literal>(build, host, target)</literal>
   platform triple of <literal>(foo, bar, baz)</literal>. If it has a
   build-time library dependency, that would be a "host → build" dependency
   with a triple of <literal>(foo, foo, *)</literal> (the target platform is
   irrelevant). If it needs a compiler to be built, that would be a "build →
   host" dependency with a triple of <literal>(foo, foo, *)</literal> (the
   target platform is irrelevant). That compiler would be built with another
   compiler, also a "build → host" dependency, with a triple of <literal>(foo,
   foo, foo)</literal>.
  </para>
 </section>
 <section xml:id="ssec-cross-cookbook">
  <title>Cross packaging cookbook</title>
  <para>
@@ -311,8 +339,8 @@
   should be answered here. Ideally, the information above is exhaustive, so
   this section cannot provide any new information, but it is ludicrous and
   cruel to expect everyone to spend effort working through the interaction of
   many features just to figure out the same answer to the same common
   problem. Feel free to add to this list!
  </para>
  <qandaset>
@@ -434,35 +462,217 @@ nix-build &lt;nixpkgs&gt; --arg crossSystem '{ config = "&lt;arch&gt;-&lt;os&gt;
   build plan or package set. A simple "build vs deploy" dichotomy is adequate:
   the sliding window principle described in the previous section shows how to
   interpolate between these two "end points" to get the 3 platform triple
   for each bootstrapping stage. That means for any package in a given package
   set, even those not bound on the top level but only reachable via
   dependencies or <varname>buildPackages</varname>, the three platforms will
   be defined as one of <varname>localSystem</varname> or
   <varname>crossSystem</varname>, with the former replacing the latter as one
   traverses build-time dependencies. A last simple difference is that
   <varname>crossSystem</varname> should be null when one doesn't want to
   cross-compile, while the <varname>*Platform</varname>s are always non-null.
   <varname>localSystem</varname> is always non-null.
  </para>
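Concretely, the nix-build invocation mentioned above can also be written as a Nix expression; the crossSystem config is just an example triple, and pkgs.hello stands in for any package:

```nix
# Build with: nix-build cross-example.nix
let
  pkgs = import <nixpkgs> {
    # crossSystem is null (omitted) for a native build and non-null to
    # cross-compile; localSystem defaults to the current machine.
    crossSystem = { config = "aarch64-unknown-linux-gnu"; };
  };
in pkgs.hello
```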
 </section>
 <!--============================================================-->
 <section xml:id="sec-cross-infra">
  <title>Cross-compilation infrastructure</title>
  <section xml:id="ssec-cross-dependency-implementation">
   <title>Implementation of dependencies</title>
   <para>
    The categories of dependencies developed in
    <xref
    linkend="ssec-cross-dependency-categorization"/> are specified as
    lists of derivations given to <varname>mkDerivation</varname>, as
    documented in <xref linkend="ssec-stdenv-dependencies"/>. In short,
    each list of dependencies for "host → target" of "foo → bar" is called
    <varname>depsFooBar</varname>, with exceptions for backwards
    compatibility that <varname>depsBuildHost</varname> is instead called
    <varname>nativeBuildInputs</varname> and <varname>depsHostTarget</varname>
    is instead called <varname>buildInputs</varname>. Nixpkgs is now structured
    so that each <varname>depsFooBar</varname> is automatically taken from
    <varname>pkgsFooBar</varname>. (These <varname>pkgsFooBar</varname>s are
    quite new, so there is no special case for
    <varname>nativeBuildInputs</varname> and <varname>buildInputs</varname>.)
    For example, <varname>pkgsBuildHost.gcc</varname> should be used at
    build-time, while <varname>pkgsHostTarget.gcc</varname> should be used at
    run-time.
   </para>
<para>
Now, for most of Nixpkgs's history, there were no
<varname>pkgsFooBar</varname> attributes, and most packages have not been
refactored to use it explicitly. Prior to those, there were just
<varname>buildPackages</varname>, <varname>pkgs</varname>, and
<varname>targetPackages</varname>. Those are now redefined as aliases to
<varname>pkgsBuildHost</varname>, <varname>pkgsHostTarget</varname>, and
<varname>pkgsTargetTarget</varname>. It is acceptable, even
recommended, to use them for libraries to show that the host platform is
irrelevant.
</para>
<para>
But before that, there was just <varname>pkgs</varname>, even though both
<varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>
existed. [Cross barely worked, and those were implemented with some hacks
on <varname>mkDerivation</varname> to override dependencies.] What this
means is the vast majority of packages do not use any explicit package set
to populate their dependencies, just using whatever
<varname>callPackage</varname> gives them even if they do correctly sort
their dependencies into the multiple lists described above. And indeed,
asking that users both sort their dependencies, <emphasis>and</emphasis>
take them from the right attribute set, is both too onerous and redundant,
so the recommended approach (for now) is to continue just categorizing by
list and not using an explicit package set.
</para>
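A sketch of what that looks like in practice: dependencies are simply sorted into the right lists and callPackage supplies suitably spliced packages. The package and the inputs chosen for each slot are only meant to be plausible, not taken from the commit:

```nix
{ stdenv, buildPackages, cmake, pkgconfig, zlib, openssl }:

stdenv.mkDerivation {
  name = "deps-example-0.1";            # hypothetical package
  src = ./.;                            # hypothetical source

  # "build → build": a tool whose host and target are both the build
  # platform, e.g. a compiler used only for build-time code generation.
  depsBuildBuild = [ buildPackages.stdenv.cc ];

  # "build → host" (alias nativeBuildInputs): runs at build time and
  # produces or configures things for the host platform.
  nativeBuildInputs = [ cmake pkgconfig ];

  # "host → target" (alias buildInputs): libraries linked into the result.
  buildInputs = [ zlib openssl ];
}
```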
<para>
To make this work, we "splice" together the six
<varname>pkgsFooBar</varname> package sets and have
<varname>callPackage</varname> actually take its arguments from that. This
is currently implemented in <filename>pkgs/top-level/splice.nix</filename>.
<varname>mkDerivation</varname> then, for each dependency attribute, pulls
the right derivation out from the splice. This splicing can be skipped when
not cross-compiling as the package sets are the same, but still is a bit
slow for cross-compiling. We'd like to do something better, but haven't
come up with anything yet.
</para>
</section>
<section xml:id="ssec-bootstrapping">
<title>Bootstrapping</title>
<para>
    Each of the package sets described above comes from a single bootstrapping
    stage. While <filename>pkgs/top-level/default.nix</filename> coordinates
    the composition of stages at a high level,
    <filename>pkgs/top-level/stage.nix</filename> "ties the knot" (creates the
    fixed point) of each stage. The package sets are defined per-stage, however,
    so they can be thought of as edges between stages (the nodes) in a graph.
    Compositions like <literal>pkgsBuildTarget.targetPackages</literal> can be
    thought of as paths in this graph.
</para>
<para>
While there are many package sets, and thus many edges, the stages can also
be arranged in a linear chain. In other words, many of the edges are
redundant as far as connectivity is concerned. This hinges on the type of
bootstrapping we do. Currently for cross it is:
<orderedlist>
<listitem>
<para>
<literal>(native, native, native)</literal>
</para>
</listitem>
<listitem>
<para>
<literal>(native, native, foreign)</literal>
</para>
</listitem>
<listitem>
<para>
<literal>(native, foreign, foreign)</literal>
</para>
</listitem>
</orderedlist>
    In each stage, <varname>pkgsBuildHost</varname> refers to the previous
    stage, <varname>pkgsBuildBuild</varname> refers to the one before that,
    <varname>pkgsHostTarget</varname> refers to the current one, and
    <varname>pkgsTargetTarget</varname> refers to the next one. When there is
    no previous or next stage, they instead refer to the current stage. Note
    how all the invariants regarding the mapping between dependency and depending
    packages' build, host, and target platforms are preserved.
    <varname>pkgsBuildTarget</varname> and <varname>pkgsHostHost</varname> are
    more complex in that the stage fitting the requirements isn't always a
    fixed chain of "prevs" and "nexts" away (modulo the "saturating"
    self-references at the ends). We just special case each instead. All the primary
    edges are implemented in <filename>pkgs/stdenv/booter.nix</filename>,
    with secondary aliases in <filename>pkgs/top-level/stage.nix</filename>.
</para>
<note>
<para>
Note the native stages are bootstrapped in legacy ways that predate the
     current cross implementation. This is why the bootstrapping stages
     leading up to the final stages are ignored in the previous paragraph.
</para>
</note>
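The stage relationships can be poked at from a small expression; pkgsCross.aarch64-multiplatform is one of the predefined cross package sets and is used here purely as an assumed example:

```nix
# Evaluate with: nix-instantiate --eval --strict stages-example.nix
let
  pkgs = (import <nixpkgs> { }).pkgsCross.aarch64-multiplatform;
in {
  # The current stage: build platform is native, host platform is foreign.
  host = pkgs.stdenv.hostPlatform.config;
  # pkgsBuildHost is the previous stage, so its host platform
  # is this stage's build platform.
  previousHost = pkgs.pkgsBuildHost.stdenv.hostPlatform.config;
}
```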
<para>
If one looks at the 3 platform triples, one can see that they overlap such
that one could put them together into a chain like:
<programlisting>
(native, native, native, foreign, foreign)
</programlisting>
If one imagines the saturating self references at the end being replaced
with infinite stages, and then overlays those platform triples, one ends up
with the infinite tuple:
<programlisting>
(native..., native, native, native, foreign, foreign, foreign...)
</programlisting>
    One can then imagine any sequence of platforms such that there are bootstrap
    stages with their 3 platforms determined by "sliding a window" that is the
    3 tuple through the sequence. This was the original model for
    bootstrapping. Without a target platform (assume a better world where all
    compilers are multi-target and all standard libraries are built in their
    own derivation), this is sufficient. Conversely, if one wishes to cross
    compile "faster", with a "Canadian Cross" bootstrapping stage where
    <literal>build != host != target</literal>, more bootstrapping stages are
    needed since no sliding window provides the pesky
<varname>pkgsBuildTarget</varname> package set since it skips the Canadian
cross stage's "host".
</para>
<note>
<para>
It is much better to refer to <varname>buildPackages</varname> than
<varname>targetPackages</varname>, or more broadly package sets that do
not mention "target". There are three reasons for this.
</para>
<para>
First, it is because bootstrapping stages do not have a unique
<varname>targetPackages</varname>. For example a <literal>(x86-linux,
x86-linux, arm-linux)</literal> and <literal>(x86-linux, x86-linux,
x86-windows)</literal> package set both have a <literal>(x86-linux,
x86-linux, x86-linux)</literal> package set. Because there is no canonical
<varname>targetPackages</varname> for such a native (<literal>build ==
host == target</literal>) package set, we set their
<varname>targetPackages</varname>
</para>
<para>
Second, it is because this is a frequent source of hard-to-follow
"infinite recursions" / cycles. When only package sets that don't mention
target are used, the package set forms a directed acyclic graph. This
means that all cycles that exist are confined to one stage. This means
they are a lot smaller, and easier to follow in the code or a backtrace. It
also means they are present in native and cross builds alike, and so more
likely to be caught by CI and other users.
</para>
<para>
Thirdly, it is because everything target-mentioning only exists to
accommodate compilers with lousy build systems that insist on the compiler
itself and standard library being built together. Of course that is bad
because bigger derivations means longer rebuilds. It is also problematic because
it tends to make the standard libraries less like other libraries than
they could be, complicating code and build systems alike. Because of the
other problems, and because of these innate disadvantages, compilers ought
to be packaged another way where possible.
</para>
</note>
<note>
<para>
If one explores Nixpkgs, they will see derivations with names like
     <literal>gccCross</literal>. Such <literal>*Cross</literal> derivations are
     a holdover from before we properly distinguished between the host and
target platforms—the derivation with "Cross" in the name covered the
<literal>build = host != target</literal> case, while the other covered
the <literal>host = target</literal>, with build platform the same or not
based on whether one was using its <literal>.nativeDrv</literal> or
<literal>.crossDrv</literal>. This ugliness will disappear soon.
</para>
</note>
</section>
 </section>
</chapter>


@@ -1,8 +1,7 @@
 { pkgs ? (import ./.. { }), nixpkgs ? { }}:
 let
   lib = pkgs.lib;
-  locationsXml = import ./lib-function-locations.nix { inherit pkgs nixpkgs; };
-  functionDocs = import ./lib-function-docs.nix { inherit locationsXml pkgs; };
+  doc-support = import ./doc-support { inherit pkgs nixpkgs; };
 in pkgs.stdenv.mkDerivation {
   name = "nixpkgs-manual";
@@ -10,30 +9,8 @@ in pkgs.stdenv.mkDerivation {
   src = ./.;
-  # Hacking on these variables? Make sure to close and open
-  # nix-shell between each test, maybe even:
-  # $ nix-shell --run "make clean all"
-  # otherwise they won't reapply :)
-  HIGHLIGHTJS = pkgs.documentation-highlighter;
-  XSL = "${pkgs.docbook_xsl_ns}/xml/xsl";
-  RNG = "${pkgs.docbook5}/xml/rng/docbook/docbook.rng";
-  XMLFORMAT_CONFIG = ../nixos/doc/xmlformat.conf;
-  xsltFlags = lib.concatStringsSep " " [
-    "--param section.autolabel 1"
-    "--param section.label.includes.component.label 1"
-    "--stringparam html.stylesheet 'style.css overrides.css highlightjs/mono-blue.css'"
-    "--stringparam html.script './highlightjs/highlight.pack.js ./highlightjs/loader.js'"
-    "--param xref.with.number.and.title 1"
-    "--param toc.section.depth 3"
-    "--stringparam admon.style ''"
-    "--stringparam callout.graphics.extension .svg"
-  ];
   postPatch = ''
-    rm -rf ./functions/library/locations.xml
-    ln -s ${locationsXml} ./functions/library/locations.xml
-    ln -s ${functionDocs} ./functions/library/generated
-    echo ${lib.version} > .version
+    ln -s ${doc-support} ./doc-support/result
   '';
   installPhase = ''

View file

@ -0,0 +1,45 @@
{ pkgs ? (import ../.. {}), nixpkgs ? { }}:
let
locationsXml = import ./lib-function-locations.nix { inherit pkgs nixpkgs; };
functionDocs = import ./lib-function-docs.nix { inherit locationsXml pkgs; };
version = pkgs.lib.version;
epub-xsl = pkgs.writeText "epub.xsl" ''
<?xml version='1.0'?>
<xsl:stylesheet
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version="1.0">
<xsl:import href="${pkgs.docbook_xsl_ns}/xml/xsl/docbook/epub/docbook.xsl" />
<xsl:import href="${./parameters.xml}"/>
</xsl:stylesheet>
'';
xhtml-xsl = pkgs.writeText "xhtml.xsl" ''
<?xml version='1.0'?>
<xsl:stylesheet
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version="1.0">
<xsl:import href="${pkgs.docbook_xsl_ns}/xml/xsl/docbook/xhtml/docbook.xsl" />
<xsl:import href="${./parameters.xml}"/>
</xsl:stylesheet>
'';
in pkgs.runCommand "doc-support" {}
''
mkdir result
(
cd result
ln -s ${locationsXml} ./function-locations.xml
ln -s ${functionDocs} ./function-docs
ln -s ${pkgs.docbook5}/xml/rng/docbook/docbook.rng ./docbook.rng
ln -s ${pkgs.docbook_xsl_ns}/xml/xsl ./xsl
ln -s ${epub-xsl} ./epub.xsl
ln -s ${xhtml-xsl} ./xhtml.xsl
ln -s ${../../nixos/doc/xmlformat.conf} ./xmlformat.conf
ln -s ${pkgs.documentation-highlighter} ./highlightjs
echo -n "${version}" > ./version
)
mv result $out
''

View file

@ -6,7 +6,7 @@
with pkgs; stdenv.mkDerivation { with pkgs; stdenv.mkDerivation {
name = "nixpkgs-lib-docs"; name = "nixpkgs-lib-docs";
src = ./../lib; src = ./../../lib;
buildInputs = [ nixdoc ]; buildInputs = [ nixdoc ];
installPhase = '' installPhase = ''

View file

@ -0,0 +1,14 @@
<?xml version='1.0'?>
<xsl:stylesheet
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
version="1.0">
<xsl:param name="section.autolabel" select="1" />
<xsl:param name="section.label.includes.component.label" select="1" />
<xsl:param name="html.stylesheet" select="'style.css overrides.css highlightjs/mono-blue.css'" />
<xsl:param name="html.script" select="'./highlightjs/highlight.pack.js ./highlightjs/loader.js'" />
<xsl:param name="xref.with.number.and.title" select="1" />
<xsl:param name="use.id.as.filename" select="1" />
<xsl:param name="toc.section.depth" select="3" />
<xsl:param name="admon.style" select="''" />
<xsl:param name="callout.graphics.extension" select="'.svg'" />
</xsl:stylesheet>

View file

@ -16,6 +16,8 @@
<xi:include href="functions/fhs-environments.xml" /> <xi:include href="functions/fhs-environments.xml" />
<xi:include href="functions/shell.xml" /> <xi:include href="functions/shell.xml" />
<xi:include href="functions/dockertools.xml" /> <xi:include href="functions/dockertools.xml" />
<xi:include href="functions/snaptools.xml" />
<xi:include href="functions/appimagetools.xml" />
<xi:include href="functions/prefer-remote-fetch.xml" /> <xi:include href="functions/prefer-remote-fetch.xml" />
<xi:include href="functions/nix-gitignore.xml" /> <xi:include href="functions/nix-gitignore.xml" />
</chapter> </chapter>

View file

@ -0,0 +1,118 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-appimageTools">
<title>pkgs.appimageTools</title>
<para>
<varname>pkgs.appimageTools</varname> is a set of functions for extracting
and wrapping <link xlink:href="https://appimage.org/">AppImage</link> files.
They are meant to be used if traditional packaging from source is infeasible,
or would take too long. To quickly run an AppImage file,
<literal>pkgs.appimage-run</literal> can be used as well.
</para>
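<para>
  For that quick-run case, here is a sketch of how
  <literal>appimage-run</literal> might be made available in a NixOS
  configuration (adapt this to your own setup; it is only an example, not a
  requirement of <varname>appimageTools</varname>):
</para>
<programlisting>
{ pkgs, ... }: {
  # Provides the `appimage-run` wrapper system-wide, so an AppImage can be
  # started with `appimage-run ./SomeApp.AppImage`.
  environment.systemPackages = [ pkgs.appimage-run ];
}
</programlisting>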
<warning>
<para>
The <varname>appimageTools</varname> API is unstable and may be subject to
backwards-incompatible changes in the future.
</para>
</warning>
<section xml:id="ssec-pkgs-appimageTools-formats">
<title>AppImage formats</title>
<para>
There are different formats for AppImages; see
<link xlink:href="https://github.com/AppImage/AppImageSpec/blob/74ad9ca2f94bf864a4a0dac1f369dd4f00bd1c28/draft.md#image-format">the
specification</link> for details.
</para>
<itemizedlist>
<listitem>
<para>
Type 1 images are ISO 9660 files that are also ELF executables.
</para>
</listitem>
<listitem>
<para>
Type 2 images are ELF executables with an appended filesystem.
</para>
</listitem>
</itemizedlist>
<para>
They can be told apart with <command>file -k</command>:
</para>
<screen>
<prompt>$ </prompt>file -k type1.AppImage
type1.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) ISO 9660 CD-ROM filesystem data 'AppImage' (Lepton 3.x), scale 0-0,
spot sensor temperature 0.000000, unit celsius, color scheme 0, calibration: offset 0.000000, slope 0.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d629f6099d2344ad82818172add1d38c5e11bc6d, stripped\012- data
<prompt>$ </prompt>file -k type2.AppImage
type2.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) (Lepton 3.x), scale 232-60668, spot sensor temperature -4.187500, color scheme 15, show scale bar, calibration: offset -0.000000, slope 0.000000 (Lepton 2.x), scale 4111-45000, spot sensor temperature 412442.250000, color scheme 3, minimum point enabled, calibration: offset -75402534979642766821519867692934234112.000000, slope 5815371847733706829839455140374904832.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=79dcc4e55a61c293c5e19edbd8d65b202842579f, stripped\012- data
</screen>
<para>
Note how the type 1 AppImage is described as an <literal>ISO 9660 CD-ROM
filesystem</literal>, and the type 2 AppImage is not.
</para>
</section>
<section xml:id="ssec-pkgs-appimageTools-wrapping">
<title>Wrapping</title>
<para>
Depending on the type of AppImage you're wrapping, you'll have to use
<varname>wrapType1</varname> or <varname>wrapType2</varname>.
</para>
<programlisting>
appimageTools.wrapType2 { # or wrapType1
name = "patchwork"; <co xml:id='ex-appimageTools-wrapping-1' />
src = fetchurl { <co xml:id='ex-appimageTools-wrapping-2' />
url = https://github.com/ssbc/patchwork/releases/download/v3.11.4/Patchwork-3.11.4-linux-x86_64.AppImage;
sha256 = "1blsprpkvm0ws9b96gb36f0rbf8f5jgmw4x6dsb1kswr4ysf591s";
};
extraPkgs = pkgs: with pkgs; [ ]; <co xml:id='ex-appimageTools-wrapping-3' />
}</programlisting>
<calloutlist>
<callout arearefs='ex-appimageTools-wrapping-1'>
<para>
<varname>name</varname> specifies the name of the resulting image.
</para>
</callout>
<callout arearefs='ex-appimageTools-wrapping-2'>
<para>
<varname>src</varname> specifies the AppImage file to extract.
</para>
</callout>
<callout arearefs='ex-appimageTools-wrapping-3'>
<para>
<varname>extraPkgs</varname> allows you to pass a function to include
additional packages inside the FHS environment your AppImage is going to
run in. There are a few ways to learn which dependencies an application
needs:
<itemizedlist>
<listitem>
<para>
Looking through the extracted AppImage files, reading its scripts and
running <command>patchelf</command> and <command>ldd</command> on its
executables. This can also be done in <command>appimage-run</command>,
by setting <command>APPIMAGE_DEBUG_EXEC=bash</command>.
</para>
</listitem>
<listitem>
<para>
Running <command>strace -vfefile</command> on the wrapped executable,
looking for libraries that can't be found.
</para>
</listitem>
</itemizedlist>
</para>
</callout>
</calloutlist>
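<para>
  For instance, an AppImage that additionally needs
  <literal>libnotify</literal> at run time could be wrapped as follows (a
  hypothetical sketch; the name and the local path are placeholders):
</para>
<programlisting>
appimageTools.wrapType2 {
  name = "some-app";                # hypothetical name
  src = /path/to/SomeApp.AppImage;  # hypothetical local AppImage file
  # Make libnotify available inside the FHS environment the application
  # runs in.
  extraPkgs = pkgs: [ pkgs.libnotify ];
}
</programlisting>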
</section>
</section>

View file

@ -24,9 +24,9 @@
<para> <para>
This function is analogous to the <command>docker build</command> command, This function is analogous to the <command>docker build</command> command,
in that it can be used to build a Docker-compatible repository tarball containing in that it can be used to build a Docker-compatible repository tarball
a single image with one or multiple layers. As such, the result is suitable containing a single image with one or multiple layers. As such, the result
for being loaded in Docker with <command>docker load</command>. is suitable for being loaded in Docker with <command>docker load</command>.
</para> </para>
<para> <para>
@ -47,7 +47,7 @@ buildImage {
contents = pkgs.redis; <co xml:id='ex-dockerTools-buildImage-6' /> contents = pkgs.redis; <co xml:id='ex-dockerTools-buildImage-6' />
runAsRoot = '' <co xml:id='ex-dockerTools-buildImage-runAsRoot' /> runAsRoot = '' <co xml:id='ex-dockerTools-buildImage-runAsRoot' />
#!${stdenv.shell} #!${pkgs.runtimeShell}
mkdir -p /data mkdir -p /data
''; '';
@ -190,8 +190,8 @@ buildImage {
By default <function>buildImage</function> will use a static date of one By default <function>buildImage</function> will use a static date of one
second past the UNIX Epoch. This allows <function>buildImage</function> to second past the UNIX Epoch. This allows <function>buildImage</function> to
produce binary reproducible images. When listing images with produce binary reproducible images. When listing images with
<command>docker images</command>, the newly created images will be <command>docker images</command>, the newly created images will be listed
listed like this: like this:
</para> </para>
<screen><![CDATA[ <screen><![CDATA[
$ docker images $ docker images
@ -312,7 +312,23 @@ hello latest de2bf4786de6 About a minute ago 25.2MB
Maximum number of layers to create. Maximum number of layers to create.
</para> </para>
<para> <para>
<emphasis>Default:</emphasis> <literal>24</literal> <emphasis>Default:</emphasis> <literal>100</literal>
</para>
<para>
<emphasis>Maximum:</emphasis> <literal>125</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>extraCommands</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Shell commands to run while building the final layer, without access
to most of the layer contents. Changes to this layer are "on top"
of all the other layers, so can create additional directories
and files.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
@ -402,9 +418,9 @@ pkgs.dockerTools.buildLayeredImage {
<para> <para>
This function is analogous to the <command>docker pull</command> command, in This function is analogous to the <command>docker pull</command> command, in
that it can be used to pull a Docker image from a Docker registry. By default that it can be used to pull a Docker image from a Docker registry. By
<link xlink:href="https://hub.docker.com/">Docker Hub</link> is used to pull default <link xlink:href="https://hub.docker.com/">Docker Hub</link> is used
images. to pull images.
</para> </para>
<para> <para>
@ -417,10 +433,11 @@ pkgs.dockerTools.buildLayeredImage {
pullImage { pullImage {
imageName = "nixos/nix"; <co xml:id='ex-dockerTools-pullImage-1' /> imageName = "nixos/nix"; <co xml:id='ex-dockerTools-pullImage-1' />
imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b"; <co xml:id='ex-dockerTools-pullImage-2' /> imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b"; <co xml:id='ex-dockerTools-pullImage-2' />
finalImageTag = "1.11"; <co xml:id='ex-dockerTools-pullImage-3' /> finalImageName = "nix"; <co xml:id='ex-dockerTools-pullImage-3' />
sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8"; <co xml:id='ex-dockerTools-pullImage-4' /> finalImageTag = "1.11"; <co xml:id='ex-dockerTools-pullImage-4' />
os = "linux"; <co xml:id='ex-dockerTools-pullImage-5' /> sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8"; <co xml:id='ex-dockerTools-pullImage-5' />
arch = "x86_64"; <co xml:id='ex-dockerTools-pullImage-6' /> os = "linux"; <co xml:id='ex-dockerTools-pullImage-6' />
arch = "x86_64"; <co xml:id='ex-dockerTools-pullImage-7' />
} }
</programlisting> </programlisting>
</example> </example>
@ -436,21 +453,18 @@ pullImage {
<callout arearefs='ex-dockerTools-pullImage-2'> <callout arearefs='ex-dockerTools-pullImage-2'>
<para> <para>
<varname>imageDigest</varname> specifies the digest of the image to be <varname>imageDigest</varname> specifies the digest of the image to be
downloaded. Skopeo can be used to get the digest of an image, with its downloaded. This argument is required.
<varname>inspect</varname> subcommand. Since a given
<varname>imageName</varname> may transparently refer to a manifest list of
images which support multiple architectures and/or operating systems,
supply the `--override-os` and `--override-arch` arguments to specify
exactly which image you want. By default it will match the OS and
architecture of the host the command is run on.
<programlisting>
$ nix-shell --packages skopeo jq --command "skopeo --override-os linux --override-arch x86_64 inspect docker://docker.io/nixos/nix:1.11 | jq -r '.Digest'"
sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b
</programlisting>
This argument is required.
</para> </para>
</callout> </callout>
<callout arearefs='ex-dockerTools-pullImage-3'> <callout arearefs='ex-dockerTools-pullImage-3'>
<para>
<varname>finalImageName</varname>, if specified, this is the name of the
image to be created. Note it is never used to fetch the image since we
prefer to rely on the immutable digest ID. By default it's equal to
<varname>imageName</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-4'>
<para> <para>
<varname>finalImageTag</varname>, if specified, this is the tag of the <varname>finalImageTag</varname>, if specified, this is the tag of the
image to be created. Note it is never used to fetch the image since we image to be created. Note it is never used to fetch the image since we
@ -458,25 +472,53 @@ sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b
<literal>latest</literal>. <literal>latest</literal>.
</para> </para>
</callout> </callout>
<callout arearefs='ex-dockerTools-pullImage-4'> <callout arearefs='ex-dockerTools-pullImage-5'>
<para> <para>
<varname>sha256</varname> is the checksum of the whole fetched image. This <varname>sha256</varname> is the checksum of the whole fetched image. This
argument is required. argument is required.
</para> </para>
</callout> </callout>
<callout arearefs='ex-dockerTools-pullImage-5'> <callout arearefs='ex-dockerTools-pullImage-6'>
<para> <para>
<varname>os</varname>, if specified, is the operating system of the <varname>os</varname>, if specified, is the operating system of the
fetched image. By default it's <literal>linux</literal>. fetched image. By default it's <literal>linux</literal>.
</para> </para>
</callout> </callout>
<callout arearefs='ex-dockerTools-pullImage-6'> <callout arearefs='ex-dockerTools-pullImage-7'>
<para> <para>
<varname>arch</varname>, if specified, is the cpu architecture of the <varname>arch</varname>, if specified, is the cpu architecture of the
fetched image. By default it's <literal>x86_64</literal>. fetched image. By default it's <literal>x86_64</literal>.
</para> </para>
</callout> </callout>
</calloutlist> </calloutlist>
<para>
The <literal>nix-prefetch-docker</literal> command can be used to get the
required image parameters:
<screen>
<prompt>$ </prompt>nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
</screen>
Since a given <varname>imageName</varname> may transparently refer to a
manifest list of images which support multiple architectures and/or
operating systems, you can supply the <option>--os</option> and
<option>--arch</option> arguments to specify exactly which image you want.
By default it will match the OS and architecture of the host the command is
run on.
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
</screen>
Desired image name and tag can be set using
<option>--final-image-name</option> and <option>--final-image-tag</option>
arguments:
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
</screen>
</para>
</section> </section>
<section xml:id="ssec-pkgs-dockerTools-exportImage"> <section xml:id="ssec-pkgs-dockerTools-exportImage">
@ -484,10 +526,10 @@ sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b
<para> <para>
This function is analogous to the <command>docker export</command> command, This function is analogous to the <command>docker export</command> command,
in that it can be used to flatten a Docker image that contains multiple layers. It in that it can be used to flatten a Docker image that contains multiple
is in fact the result of the merge of all the layers of the image. As such, layers. It is in fact the result of the merge of all the layers of the
the result is suitable for being imported in Docker with <command>docker image. As such, the result is suitable for being imported in Docker with
import</command>. <command>docker import</command>.
</para> </para>
<note> <note>
@ -511,7 +553,7 @@ exportImage {
name = someLayeredImage.name; name = someLayeredImage.name;
} }
</programlisting> </programlisting>
</example> </example>
<para> <para>
@ -544,7 +586,7 @@ buildImage {
name = "shadow-basic"; name = "shadow-basic";
runAsRoot = '' runAsRoot = ''
#!${stdenv.shell} #!${pkgs.runtimeShell}
${shadowSetup} ${shadowSetup}
groupadd -r redis groupadd -r redis
useradd -r -g redis redis useradd -r -g redis redis

View file

@ -5,24 +5,21 @@
<title>Fetcher functions</title> <title>Fetcher functions</title>
<para> <para>
When using Nix, you will frequently need to download source code When using Nix, you will frequently need to download source code and other
and other files from the internet. Nixpkgs comes with a few helper files from the internet. Nixpkgs comes with a few helper functions that allow
functions that allow you to fetch fixed-output derivations in a you to fetch fixed-output derivations in a structured way.
structured way.
</para> </para>
<para> <para>
The two fetcher primitives are <function>fetchurl</function> and The two fetcher primitives are <function>fetchurl</function> and
<function>fetchzip</function>. Both of these have two required <function>fetchzip</function>. Both of these have two required arguments, a
arguments, a URL and a hash. The hash is typically URL and a hash. The hash is typically <literal>sha256</literal>, although
<literal>sha256</literal>, although many more hash algorithms are many more hash algorithms are supported. Nixpkgs contributors are currently
supported. Nixpkgs contributors are currently recommended to use recommended to use <literal>sha256</literal>. This hash will be used by Nix
<literal>sha256</literal>. This hash will be used by Nix to to identify your source. A typical usage of fetchurl is provided below.
identify your source. A typical usage of fetchurl is provided
below.
</para> </para>
<programlisting><![CDATA[ <programlisting><![CDATA[
{ stdenv, fetchurl }: { stdenv, fetchurl }:
stdenv.mkDerivation { stdenv.mkDerivation {
@ -35,172 +32,163 @@ stdenv.mkDerivation {
]]></programlisting> ]]></programlisting>
<para> <para>
The main difference between <function>fetchurl</function> and The main difference between <function>fetchurl</function> and
<function>fetchzip</function> is in how they store the contents. <function>fetchzip</function> is in how they store the contents.
<function>fetchurl</function> will store the unaltered contents of <function>fetchurl</function> will store the unaltered contents of the URL
the URL within the Nix store. <function>fetchzip</function> on the within the Nix store. <function>fetchzip</function> on the other hand will
other hand will decompress the archive for you, making files and decompress the archive for you, making files and directories directly
directories directly accessible in the future. accessible in the future. <function>fetchzip</function> can only be used with
<function>fetchzip</function> can only be used with archives. archives. Despite the name, <function>fetchzip</function> is not limited to
Despite the name, <function>fetchzip</function> is not limited to .zip files and can also be used with any tarball.
.zip files and can also be used with any tarball.
</para> </para>
<para> <para>
<function>fetchpatch</function> works very similarly to <function>fetchpatch</function> works very similarly to
<function>fetchurl</function> with the same arguments expected. It <function>fetchurl</function> with the same arguments expected. It expects
expects patch files as a source and and performs normalization on patch files as a source and and performs normalization on them before
them before computing the checksum. For example it will remove computing the checksum. For example it will remove comments or other unstable
comments or other unstable parts that are sometimes added by parts that are sometimes added by version control systems and can change over
version control systems and can change over time. time.
</para> </para>
<para> <para>
Other fetcher functions allow you to add source code directly from Other fetcher functions allow you to add source code directly from a VCS such
a VCS such as subversion or git. These are mostly straightforward as subversion or git. These are mostly straightforward names based on the
names based on the name of the command used with the VCS system. name of the command used with the VCS system. Because they give you a working
Because they give you a working repository, they act most like repository, they act most like <function>fetchzip</function>.
<function>fetchzip</function>.
</para> </para>
<variablelist> <variablelist>
<varlistentry> <varlistentry>
<term> <term>
<literal>fetchsvn</literal> <literal>fetchsvn</literal>
</term> </term>
<listitem> <listitem>
<para> <para>
Used with Subversion. Expects <literal>url</literal> to a Used with Subversion. Expects <literal>url</literal> to a Subversion
Subversion directory, <literal>rev</literal>, and directory, <literal>rev</literal>, and <literal>sha256</literal>.
<literal>sha256</literal>. </para>
</para> </listitem>
</listitem> </varlistentry>
</varlistentry> <varlistentry>
<varlistentry> <term>
<term> <literal>fetchgit</literal>
<literal>fetchgit</literal> </term>
</term> <listitem>
<listitem> <para>
<para> Used with Git. Expects <literal>url</literal> to a Git repo,
Used with Git. Expects <literal>url</literal> to a Git repo, <literal>rev</literal>, and <literal>sha256</literal>.
<literal>rev</literal>, and <literal>sha256</literal>. <literal>rev</literal> in this case can be full the git commit id (SHA1
<literal>rev</literal> in this case can be full the git commit hash) or a tag name like <literal>refs/tags/v1.0</literal>.
id (SHA1 hash) or a tag name like </para>
<literal>refs/tags/v1.0</literal>. </listitem>
</para> </varlistentry>
</listitem> <varlistentry>
</varlistentry> <term>
<varlistentry> <literal>fetchfossil</literal>
<term> </term>
<literal>fetchfossil</literal> <listitem>
</term> <para>
<listitem> Used with Fossil. Expects <literal>url</literal> to a Fossil archive,
<para> <literal>rev</literal>, and <literal>sha256</literal>.
Used with Fossil. Expects <literal>url</literal> to a Fossil </para>
archive, <literal>rev</literal>, and <literal>sha256</literal>. </listitem>
</para> </varlistentry>
</listitem> <varlistentry>
</varlistentry> <term>
<varlistentry> <literal>fetchcvs</literal>
<term> </term>
<literal>fetchcvs</literal> <listitem>
</term> <para>
<listitem> Used with CVS. Expects <literal>cvsRoot</literal>, <literal>tag</literal>,
<para> and <literal>sha256</literal>.
Used with CVS. Expects <literal>cvsRoot</literal>, </para>
<literal>tag</literal>, and <literal>sha256</literal>. </listitem>
</para> </varlistentry>
</listitem> <varlistentry>
</varlistentry> <term>
<varlistentry> <literal>fetchhg</literal>
<term> </term>
<literal>fetchhg</literal> <listitem>
</term> <para>
<listitem> Used with Mercurial. Expects <literal>url</literal>,
<para> <literal>rev</literal>, and <literal>sha256</literal>.
Used with Mercurial. Expects <literal>url</literal>, </para>
<literal>rev</literal>, and <literal>sha256</literal>. </listitem>
</para> </varlistentry>
</listitem>
</varlistentry>
</variablelist> </variablelist>
<para> <para>
A number of fetcher functions wrap part of A number of fetcher functions wrap part of <function>fetchurl</function> and
<function>fetchurl</function> and <function>fetchzip</function>. <function>fetchzip</function>. They are mainly convenience functions intended
They are mainly convenience functions intended for commonly used for commonly used destinations of source code in Nixpkgs. These wrapper
destinations of source code in Nixpkgs. These wrapper fetchers are fetchers are listed below.
listed below.
</para> </para>
<variablelist> <variablelist>
<varlistentry> <varlistentry>
<term> <term>
<literal>fetchFromGitHub</literal> <literal>fetchFromGitHub</literal>
</term> </term>
<listitem> <listitem>
<para> <para>
<function>fetchFromGitHub</function> expects four arguments. <function>fetchFromGitHub</function> expects four arguments.
<literal>owner</literal> is a string corresponding to the <literal>owner</literal> is a string corresponding to the GitHub user or
GitHub user or organization that controls this repository. organization that controls this repository. <literal>repo</literal>
<literal>repo</literal> corresponds to the name of the corresponds to the name of the software repository. These are located at
software repository. These are located at the top of every the top of every GitHub HTML page as
GitHub HTML page as <literal>owner</literal>/<literal>repo</literal>. <literal>rev</literal>
<literal>owner</literal>/<literal>repo</literal>. corresponds to the Git commit hash or tag (e.g <literal>v1.0</literal>)
<literal>rev</literal> corresponds to the Git commit hash or that will be downloaded from Git. Finally, <literal>sha256</literal>
tag (e.g <literal>v1.0</literal>) that will be downloaded from corresponds to the hash of the extracted directory. Again, other hash
Git. Finally, <literal>sha256</literal> corresponds to the algorithms are also available but <literal>sha256</literal> is currently
hash of the extracted directory. Again, other hash algorithms preferred.
are also available but <literal>sha256</literal> is currently </para>
preferred. </listitem>
</para> </varlistentry>
</listitem> <varlistentry>
</varlistentry> <term>
<varlistentry> <literal>fetchFromGitLab</literal>
<term> </term>
<literal>fetchFromGitLab</literal> <listitem>
</term> <para>
<listitem> This is used with GitLab repositories. The arguments expected are very
<para> similar to fetchFromGitHub above.
This is used with GitLab repositories. The arguments expected </para>
are very similar to fetchFromGitHub above. </listitem>
</para> </varlistentry>
</listitem> <varlistentry>
</varlistentry> <term>
<varlistentry> <literal>fetchFromBitbucket</literal>
<term> </term>
<literal>fetchFromBitbucket</literal> <listitem>
</term> <para>
<listitem> This is used with BitBucket repositories. The arguments expected are very
<para> similar to fetchFromGitHub above.
This is used with BitBucket repositories. The arguments expected </para>
are very similar to fetchFromGitHub above. </listitem>
</para> </varlistentry>
</listitem> <varlistentry>
</varlistentry> <term>
<varlistentry> <literal>fetchFromSavannah</literal>
<term> </term>
<literal>fetchFromSavannah</literal> <listitem>
</term> <para>
<listitem> This is used with Savannah repositories. The arguments expected are very
<para> similar to fetchFromGitHub above.
This is used with Savannah repositories. The arguments expected </para>
are very similar to fetchFromGitHub above. </listitem>
</para> </varlistentry>
</listitem> <varlistentry>
</varlistentry> <term>
<varlistentry> <literal>fetchFromRepoOrCz</literal>
<term> </term>
<literal>fetchFromRepoOrCz</literal> <listitem>
</term> <para>
<listitem> This is used with repo.or.cz repositories. The arguments expected are very
<para> similar to fetchFromGitHub above.
This is used with repo.or.cz repositories. The arguments </para>
expected are very similar to fetchFromGitHub above. </listitem>
</para> </varlistentry>
</listitem>
</varlistentry>
</variablelist> </variablelist>
</section> </section>

View file

@ -13,12 +13,17 @@
<xi:include href="./library/attrsets.xml" /> <xi:include href="./library/attrsets.xml" />
<!-- These docs are generated via nixdoc. To add another generated <!-- These docs are generated via nixdoc. To add another generated
library function file to this list, the file library function file to this list, the file
`lib-function-docs.nix` must also be updated. --> `lib-function-docs.nix` must also be updated. -->
<xi:include href="./library/generated/strings.xml" /> <xi:include href="./library/generated/strings.xml" />
<xi:include href="./library/generated/trivial.xml" /> <xi:include href="./library/generated/trivial.xml" />
<xi:include href="./library/generated/lists.xml" /> <xi:include href="./library/generated/lists.xml" />
<xi:include href="./library/generated/debug.xml" /> <xi:include href="./library/generated/debug.xml" />
<xi:include href="./library/generated/options.xml" /> <xi:include href="./library/generated/options.xml" />
</section> </section>

View file

@ -14,15 +14,15 @@
<title>Usage</title> <title>Usage</title>
<para> <para>
<literal>pkgs.nix-gitignore</literal> exports a number of functions, but <literal>pkgs.nix-gitignore</literal> exports a number of functions, but
you'll most likely need either <literal>gitignoreSource</literal> or you'll most likely need either <literal>gitignoreSource</literal> or
<literal>gitignoreSourcePure</literal>. As their first argument, they both <literal>gitignoreSourcePure</literal>. As their first argument, they both
accept either 1. a file with gitignore lines or 2. a string accept either 1. a file with gitignore lines or 2. a string with gitignore
with gitignore lines, or 3. a list of either of the two. They will be lines, or 3. a list of either of the two. They will be concatenated into a
concatenated into a single big string. single big string.
</para> </para>
<programlisting><![CDATA[ <programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }: { pkgs ? import <nixpkgs> {} }:
nix-gitignore.gitignoreSource [] ./source nix-gitignore.gitignoreSource [] ./source
@ -40,24 +40,29 @@
]]></programlisting> ]]></programlisting>
<para> <para>
These functions are derived from the <literal>Filter</literal> functions These functions are derived from the <literal>Filter</literal> functions by
by setting the first filter argument to <literal>(_: _: true)</literal>: setting the first filter argument to <literal>(_: _: true)</literal>:
</para> </para>
<programlisting><![CDATA[ <programlisting><![CDATA[
gitignoreSourcePure = gitignoreFilterSourcePure (_: _: true); gitignoreSourcePure = gitignoreFilterSourcePure (_: _: true);
gitignoreSource = gitignoreFilterSource (_: _: true); gitignoreSource = gitignoreFilterSource (_: _: true);
]]></programlisting> ]]></programlisting>
<para> <para>
Those filter functions accept the same arguments the <literal>builtins.filterSource</literal> function would pass to its filters, thus <literal>fn: gitignoreFilterSourcePure fn ""</literal> should be extensionally equivalent to <literal>filterSource</literal>. The file is blacklisted iff it's blacklisted by either your filter or the gitignoreFilter. Those filter functions accept the same arguments the
<literal>builtins.filterSource</literal> function would pass to its filters,
thus <literal>fn: gitignoreFilterSourcePure fn ""</literal> should be
extensionally equivalent to <literal>filterSource</literal>. The file is
blacklisted iff it's blacklisted by either your filter or the
gitignoreFilter.
</para> </para>
<para> <para>
If you want to make your own filter from scratch, you may use If you want to make your own filter from scratch, you may use
</para> </para>
<programlisting><![CDATA[ <programlisting><![CDATA[
gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root; gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root;
]]></programlisting> ]]></programlisting>
</section> </section>
@ -66,10 +71,11 @@ gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root;
<title>gitignore files in subdirectories</title> <title>gitignore files in subdirectories</title>
<para> <para>
If you wish to use a filter that would search for .gitignore files in subdirectories, just like git does by default, use this function: If you wish to use a filter that would search for .gitignore files in
</para> subdirectories, just like git does by default, use this function:
</para>
<programlisting><![CDATA[ <programlisting><![CDATA[
gitignoreFilterRecursiveSource = filter: patterns: root: gitignoreFilterRecursiveSource = filter: patterns: root:
# OR # OR
gitignoreRecursiveSource = gitignoreFilterSourcePure (_: _: true); gitignoreRecursiveSource = gitignoreFilterSourcePure (_: _: true);

View file

@ -7,21 +7,19 @@
<para> <para>
<function>prefer-remote-fetch</function> is an overlay that download sources <function>prefer-remote-fetch</function> is an overlay that download sources
on remote builder. This is useful when the evaluating machine has a slow on remote builder. This is useful when the evaluating machine has a slow
upload while the builder can fetch faster directly from the source. upload while the builder can fetch faster directly from the source. To use
To use it, put the following snippet as a new overlay: it, put the following snippet as a new overlay:
<programlisting> <programlisting>
self: super: self: super:
(super.prefer-remote-fetch self super) (super.prefer-remote-fetch self super)
</programlisting> </programlisting>
A full configuration example for that sets the overlay up for your own
A full configuration example for that sets the overlay up for your own account, account, could look like this
could look like this <screen>
<prompt>$ </prompt>mkdir ~/.config/nixpkgs/overlays/
<programlisting> <prompt>$ </prompt>cat &gt; ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix &lt;&lt;EOF
$ mkdir ~/.config/nixpkgs/overlays/ self: super: super.prefer-remote-fetch self super
$ cat &gt; ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix &lt;&lt;EOF EOF
self: super: super.prefer-remote-fetch self super </screen>
EOF
</programlisting>
</para> </para>
</section> </section>

View file

@ -0,0 +1,28 @@
let
inherit (import <nixpkgs> { }) snapTools firefox;
in snapTools.makeSnap {
meta = {
name = "nix-example-firefox";
summary = firefox.meta.description;
architectures = [ "amd64" ];
apps.nix-example-firefox = {
command = "${firefox}/bin/firefox";
plugs = [
"pulseaudio"
"camera"
"browser-support"
"avahi-observe"
"cups-control"
"desktop"
"desktop-legacy"
"gsettings"
"home"
"network"
"mount-observe"
"removable-media"
"x11"
];
};
confinement = "strict";
};
}

View file

@ -0,0 +1,12 @@
let
inherit (import <nixpkgs> { }) snapTools hello;
in snapTools.makeSnap {
meta = {
name = "hello";
summary = hello.meta.description;
description = hello.meta.longDescription;
architectures = [ "amd64" ];
confinement = "strict";
apps.hello.command = "${hello}/bin/hello";
};
}

View file

@ -0,0 +1,74 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-snapTools">
<title>pkgs.snapTools</title>
<para>
<varname>pkgs.snapTools</varname> is a set of functions for creating
Snapcraft images. Snap and Snapcraft are not used to perform these operations.
</para>
<section xml:id="ssec-pkgs-snapTools-makeSnap-signature">
<title>The makeSnap Function</title>
<para>
<function>makeSnap</function> takes a single named argument,
<parameter>meta</parameter>. This argument mirrors
<link xlink:href="https://docs.snapcraft.io/snap-format">the upstream
<filename>snap.yaml</filename> format</link> exactly.
</para>
<para>
The <parameter>base</parameter> should not be specified, as
<function>makeSnap</function> will forcibly set it.
</para>
<para>
Currently, <function>makeSnap</function> does not support creating GUI
stubs.
</para>
</section>
<section xml:id="ssec-pkgs-snapTools-build-a-snap-hello">
<title>Build a Hello World Snap</title>
<example xml:id="ex-snapTools-buildSnap-hello">
<title>Making a Hello World Snap</title>
<para>
The following expression packages GNU Hello as a Snapcraft snap.
</para>
<programlisting><xi:include href="./snap/example-hello.nix" parse="text" /></programlisting>
<para>
<command>nix-build</command> this expression and install it with
<command>snap install ./result --dangerous</command>.
<command>hello</command> will now be the Snapcraft version of the package.
</para>
</example>
</section>
<section xml:id="ssec-pkgs-snapTools-build-a-snap-firefox">
<title>Build a Firefox Snap</title>
<example xml:id="ex-snapTools-buildSnap-firefox">
<title>Making a Graphical Snap</title>
<para>
Graphical programs require many more integrations with the host. This
example uses Firefox because it is one of the most complicated programs
we could package.
</para>
<programlisting><xi:include href="./snap/example-firefox.nix" parse="text" /></programlisting>
<para>
<command>nix-build</command> this expression and install it with
<command>snap install ./result --dangerous</command>.
<command>nix-example-firefox</command> will now be the Snapcraft version of
the Firefox package.
</para>
<para>
The specific meaning behind plugs can be looked up in the
<link xlink:href="https://docs.snapcraft.io/supported-interfaces">Snapcraft
interface documentation</link>.
</para>
</example>
</section>
</section>

View file

@ -5,12 +5,11 @@
<title>Trivial builders</title> <title>Trivial builders</title>
<para> <para>
Nixpkgs provides a couple of functions that help with building Nixpkgs provides a couple of functions that help with building derivations.
derivations. The most important one, The most important one, <function>stdenv.mkDerivation</function>, has already
<function>stdenv.mkDerivation</function>, has already been been documented above. The following functions wrap
documented above. The following functions wrap <function>stdenv.mkDerivation</function>, making it easier to use in certain
<function>stdenv.mkDerivation</function>, making it easier to use cases.
in certain cases.
</para> </para>
<variablelist> <variablelist>
@ -19,45 +18,42 @@
<literal>runCommand</literal> <literal>runCommand</literal>
</term> </term>
<listitem> <listitem>
<para> <para>
This takes three arguments, <literal>name</literal>, This takes three arguments, <literal>name</literal>,
<literal>env</literal>, and <literal>buildCommand</literal>. <literal>env</literal>, and <literal>buildCommand</literal>.
<literal>name</literal> is just the name that Nix will append <literal>name</literal> is just the name that Nix will append to the store
to the store path in the same way that path in the same way that <literal>stdenv.mkDerivation</literal> uses its
<literal>stdenv.mkDerivation</literal> uses its <literal>name</literal> attribute. <literal>env</literal> is an attribute
<literal>name</literal> attribute. <literal>env</literal> is an set specifying environment variables that will be set for this derivation.
attribute set specifying environment variables that will be set These attributes are then passed to the wrapped
for this derivation. These attributes are then passed to the <literal>stdenv.mkDerivation</literal>. <literal>buildCommand</literal>
wrapped <literal>stdenv.mkDerivation</literal>. specifies the commands that will be run to create this derivation. Note
<literal>buildCommand</literal> specifies the commands that that you will need to create <literal>$out</literal> for Nix to register
will be run to create this derivation. Note that you will need the command as successful.
to create <literal>$out</literal> for Nix to register the </para>
command as successful. <para>
</para> An example of using <literal>runCommand</literal> is provided below.
<para> </para>
An example of using <literal>runCommand</literal> is provided <programlisting>
below. (import &lt;nixpkgs&gt; {}).runCommand "my-example" {} ''
</para> echo My example command is running
<programlisting>
(import &lt;nixpkgs&gt; {}).runCommand "my-example" {} ''
echo My example command is running
mkdir $out mkdir $out
echo I can write data to the Nix store > $out/message echo I can write data to the Nix store > $out/message
echo I can also run basic commands like: echo I can also run basic commands like:
echo ls echo ls
ls ls
echo whoami echo whoami
whoami whoami
echo date echo date
date date
'' ''
</programlisting> </programlisting>
</listitem> </listitem>
</varlistentry> </varlistentry>
<varlistentry> <varlistentry>
@ -65,41 +61,35 @@
<literal>runCommandCC</literal> <literal>runCommandCC</literal>
</term> </term>
<listitem> <listitem>
<para> <para>
This works just like <literal>runCommand</literal>. The only This works just like <literal>runCommand</literal>. The only difference is
difference is that it also provides a C compiler in that it also provides a C compiler in <literal>buildCommand</literal>s
<literal>buildCommand</literal>s environment. To minimize your environment. To minimize your dependencies, you should only use this if
dependencies, you should only use this if you are sure you will you are sure you will need a C compiler as part of running your command.
need a C compiler as part of running your command.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
<varlistentry> <varlistentry>
<term> <term>
<literal>writeTextFile</literal>, <literal>writeText</literal>, <literal>writeTextFile</literal>, <literal>writeText</literal>, <literal>writeTextDir</literal>, <literal>writeScript</literal>, <literal>writeScriptBin</literal>
<literal>writeTextDir</literal>, <literal>writeScript</literal>,
<literal>writeScriptBin</literal>
</term> </term>
<listitem> <listitem>
<para> <para>
These functions write <literal>text</literal> to the Nix store. These functions write <literal>text</literal> to the Nix store. This is
This is useful for creating scripts from Nix expressions. useful for creating scripts from Nix expressions.
<literal>writeTextFile</literal> takes an attribute set and <literal>writeTextFile</literal> takes an attribute set and expects two
expects two arguments, <literal>name</literal> and arguments, <literal>name</literal> and <literal>text</literal>.
<literal>text</literal>. <literal>name</literal> corresponds to <literal>name</literal> corresponds to the name used in the Nix store
the name used in the Nix store path. <literal>text</literal> path. <literal>text</literal> will be the contents of the file. You can
will be the contents of the file. You can also set also set <literal>executable</literal> to true to make this file have the
<literal>executable</literal> to true to make this file have executable bit set.
the executable bit set. </para>
</para> <para>
<para> Many more commands wrap <literal>writeTextFile</literal> including
Many more commands wrap <literal>writeTextFile</literal> <literal>writeText</literal>, <literal>writeTextDir</literal>,
including <literal>writeText</literal>, <literal>writeScript</literal>, and <literal>writeScriptBin</literal>.
<literal>writeTextDir</literal>, These are convenience functions over <literal>writeTextFile</literal>.
<literal>writeScript</literal>, and </para>
<literal>writeScriptBin</literal>. These are convenience
functions over <literal>writeTextFile</literal>.
</para>
</listitem> </listitem>
</varlistentry> </varlistentry>
<varlistentry> <varlistentry>
@ -109,16 +99,15 @@
<listitem> <listitem>
<para> <para>
This can be used to put many derivations into the same directory This can be used to put many derivations into the same directory
structure. It works by creating a new derivation and adding structure. It works by creating a new derivation and adding symlinks to
symlinks to each of the paths listed. It expects two arguments, each of the paths listed. It expects two arguments,
<literal>name</literal>, and <literal>paths</literal>. <literal>name</literal>, and <literal>paths</literal>.
<literal>name</literal> is the name used in the Nix store path <literal>name</literal> is the name used in the Nix store path for the
for the created derivation. <literal>paths</literal> is a list of created derivation. <literal>paths</literal> is a list of paths that will
paths that will be symlinked. These paths can be to Nix store be symlinked. These paths can be to Nix store derivations or any other
derivations or any other subdirectory contained within. subdirectory contained within.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
</variablelist> </variablelist>
</section> </section>

View file

@ -185,10 +185,9 @@ with import <nixpkgs> {};
androidenv.emulateApp { androidenv.emulateApp {
name = "emulate-MyAndroidApp"; name = "emulate-MyAndroidApp";
platformVersion = "24"; platformVersion = "28";
abiVersion = "armeabi-v7a"; # mips, x86 or x86_64 abiVersion = "x86_64"; # armeabi-v7a, mips, x86
systemImageType = "default"; systemImageType = "google_apis_playstore";
useGoogleAPIs = false;
} }
``` ```
@ -201,7 +200,7 @@ with import <nixpkgs> {};
androidenv.emulateApp { androidenv.emulateApp {
name = "emulate-MyAndroidApp"; name = "emulate-MyAndroidApp";
platformVersion = "24"; platformVersion = "24";
abiVersion = "armeabi-v7a"; # mips, x86 or x86_64 abiVersion = "armeabi-v7a"; # mips, x86, x86_64
systemImageType = "default"; systemImageType = "default";
useGoogleAPIs = false; useGoogleAPIs = false;
app = ./MyApp.apk; app = ./MyApp.apk;

View file

@ -131,8 +131,8 @@
in <literal>beamPackages</literal>, use the following command: in <literal>beamPackages</literal>, use the following command:
</para> </para>
<programlisting> <screen>
$ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -qaP -A beamPackages <prompt>$ </prompt>nix-env -f &quot;&lt;nixpkgs&gt;&quot; -qaP -A beamPackages
beamPackages.esqlite esqlite-0.2.1 beamPackages.esqlite esqlite-0.2.1
beamPackages.goldrush goldrush-0.1.7 beamPackages.goldrush goldrush-0.1.7
beamPackages.ibrowse ibrowse-4.2.2 beamPackages.ibrowse ibrowse-4.2.2
@ -140,16 +140,16 @@ beamPackages.jiffy jiffy-0.14.5
beamPackages.lager lager-3.0.2 beamPackages.lager lager-3.0.2
beamPackages.meck meck-0.8.3 beamPackages.meck meck-0.8.3
beamPackages.rebar3-pc pc-1.1.0 beamPackages.rebar3-pc pc-1.1.0
</programlisting> </screen>
<para> <para>
To install any of those packages into your profile, refer to them by their To install any of those packages into your profile, refer to them by their
attribute path (first column): attribute path (first column):
</para> </para>
<programlisting> <screen>
$ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse <prompt>$ </prompt>nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</programlisting> </screen>
<para> <para>
The attribute path of any BEAM package corresponds to the name of that The attribute path of any BEAM package corresponds to the name of that
@ -178,22 +178,22 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</para> </para>
<programlisting> <programlisting>
{ stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }: { stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 rec { buildRebar3 rec {
name = "hex2nix"; name = "hex2nix";
version = "0.0.1"; version = "0.0.1";
src = fetchFromGitHub { src = fetchFromGitHub {
owner = "ericbmerritt"; owner = "ericbmerritt";
repo = "hex2nix"; repo = "hex2nix";
rev = "${version}"; rev = "${version}";
sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg"; sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg";
}; };
beamDeps = [ ibrowse jsx erlware_commons ]; beamDeps = [ ibrowse jsx erlware_commons ];
} }
</programlisting> </programlisting>
<para> <para>
Such derivations are callable with Such derivations are callable with
@ -228,29 +228,29 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</para> </para>
<programlisting> <programlisting>
{ buildErlangMk, fetchHex, cowlib, ranch }: { buildErlangMk, fetchHex, cowlib, ranch }:
buildErlangMk { buildErlangMk {
name = "cowboy"; name = "cowboy";
version = "1.0.4"; version = "1.0.4";
src = fetchHex { src = fetchHex {
pkg = "cowboy"; pkg = "cowboy";
version = "1.0.4"; version = "1.0.4";
sha256 = "6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac"; sha256 = "6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
}; };
beamDeps = [ cowlib ranch ]; beamDeps = [ cowlib ranch ];
meta = { meta = {
description = '' description = ''
Small, fast, modular HTTP server written in Erlang Small, fast, modular HTTP server written in Erlang
''; '';
license = stdenv.lib.licenses.isc; license = stdenv.lib.licenses.isc;
homepage = https://github.com/ninenines/cowboy; homepage = https://github.com/ninenines/cowboy;
}; };
} }
</programlisting> </programlisting>
</section> </section>
<section xml:id="mix-packages"> <section xml:id="mix-packages">
@ -262,56 +262,56 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
</para> </para>
<programlisting> <programlisting>
{ buildMix, fetchHex, plug, absinthe }: { buildMix, fetchHex, plug, absinthe }:
buildMix { buildMix {
name = "absinthe_plug"; name = "absinthe_plug";
version = "1.0.0"; version = "1.0.0";
src = fetchHex { src = fetchHex {
pkg = "absinthe_plug"; pkg = "absinthe_plug";
version = "1.0.0"; version = "1.0.0";
sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33"; sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
}; };
beamDeps = [ plug absinthe ]; beamDeps = [ plug absinthe ];
meta = { meta = {
description = '' description = ''
A plug for Absinthe, an experimental GraphQL toolkit A plug for Absinthe, an experimental GraphQL toolkit
''; '';
license = stdenv.lib.licenses.bsd3; license = stdenv.lib.licenses.bsd3;
homepage = https://github.com/CargoSense/absinthe_plug; homepage = https://github.com/CargoSense/absinthe_plug;
}; };
} }
</programlisting> </programlisting>
<para> <para>
Alternatively, we can use <literal>buildHex</literal> as a shortcut: Alternatively, we can use <literal>buildHex</literal> as a shortcut:
</para> </para>
<programlisting> <programlisting>
{ buildHex, buildMix, plug, absinthe }: { buildHex, buildMix, plug, absinthe }:
buildHex { buildHex {
name = "absinthe_plug"; name = "absinthe_plug";
version = "1.0.0"; version = "1.0.0";
sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33"; sha256 = "08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
builder = buildMix; builder = buildMix;
beamDeps = [ plug absinthe ]; beamDeps = [ plug absinthe ];
meta = { meta = {
description = '' description = ''
A plug for Absinthe, an experimental GraphQL toolkit A plug for Absinthe, an experimental GraphQL toolkit
''; '';
license = stdenv.lib.licenses.bsd3; license = stdenv.lib.licenses.bsd3;
homepage = https://github.com/CargoSense/absinthe_plug; homepage = https://github.com/CargoSense/absinthe_plug;
}; };
} }
</programlisting> </programlisting>
</section> </section>
</section> </section>
</section> </section>
@ -330,47 +330,47 @@ $ nix-env -f &quot;&lt;nixpkgs&gt;&quot; -iA beamPackages.ibrowse
could do the following: could do the following:
</para> </para>
<programlisting> <screen>
$ nix-shell -A beamPackages.ibrowse.env --run "erl" <prompt>$ </prompt><userinput>nix-shell -A beamPackages.ibrowse.env --run "erl"</userinput>
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false] <computeroutput>Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V7.0 (abort with ^G) Eshell V7.0 (abort with ^G)</computeroutput>
1> m(ibrowse). <prompt>1> </prompt><userinput>m(ibrowse).</userinput>
Module: ibrowse <computeroutput>Module: ibrowse
MD5: 3b3e0137d0cbb28070146978a3392945 MD5: 3b3e0137d0cbb28070146978a3392945
Compiled: January 10 2016, 23:34 Compiled: January 10 2016, 23:34
Object file: /nix/store/g1rlf65rdgjs4abbyj4grp37ry7ywivj-ibrowse-4.2.2/lib/erlang/lib/ibrowse-4.2.2/ebin/ibrowse.beam Object file: /nix/store/g1rlf65rdgjs4abbyj4grp37ry7ywivj-ibrowse-4.2.2/lib/erlang/lib/ibrowse-4.2.2/ebin/ibrowse.beam
Compiler options: [{outdir,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/ebin"}, Compiler options: [{outdir,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/ebin"},
debug_info,debug_info,nowarn_shadow_vars, debug_info,debug_info,nowarn_shadow_vars,
warn_unused_import,warn_unused_vars,warnings_as_errors, warn_unused_import,warn_unused_vars,warnings_as_errors,
{i,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/include"}] {i,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/include"}]
Exports: Exports:
add_config/1 send_req_direct/7 add_config/1 send_req_direct/7
all_trace_off/0 set_dest/3 all_trace_off/0 set_dest/3
code_change/3 set_max_attempts/3 code_change/3 set_max_attempts/3
get_config_value/1 set_max_pipeline_size/3 get_config_value/1 set_max_pipeline_size/3
get_config_value/2 set_max_sessions/3 get_config_value/2 set_max_sessions/3
get_metrics/0 show_dest_status/0 get_metrics/0 show_dest_status/0
get_metrics/2 show_dest_status/1 get_metrics/2 show_dest_status/1
handle_call/3 show_dest_status/2 handle_call/3 show_dest_status/2
handle_cast/2 spawn_link_worker_process/1 handle_cast/2 spawn_link_worker_process/1
handle_info/2 spawn_link_worker_process/2 handle_info/2 spawn_link_worker_process/2
init/1 spawn_worker_process/1 init/1 spawn_worker_process/1
module_info/0 spawn_worker_process/2 module_info/0 spawn_worker_process/2
module_info/1 start/0 module_info/1 start/0
rescan_config/0 start_link/0 rescan_config/0 start_link/0
rescan_config/1 stop/0 rescan_config/1 stop/0
send_req/3 stop_worker_process/1 send_req/3 stop_worker_process/1
send_req/4 stream_close/1 send_req/4 stream_close/1
send_req/5 stream_next/1 send_req/5 stream_next/1
send_req/6 terminate/2 send_req/6 terminate/2
send_req_direct/4 trace_off/0 send_req_direct/4 trace_off/0
send_req_direct/5 trace_off/2 send_req_direct/5 trace_off/2
send_req_direct/6 trace_on/0 send_req_direct/6 trace_on/0
trace_on/2 trace_on/2
ok ok</computeroutput>
2> <prompt>2></prompt>
</programlisting> </screen>
<para> <para>
Notice the <literal>-A beamPackages.ibrowse.env</literal>. That is the key Notice the <literal>-A beamPackages.ibrowse.env</literal>. That is the key
@ -408,7 +408,7 @@ let
in in
drv drv
</programlisting> </programlisting>
<section xml:id="building-in-a-shell"> <section xml:id="building-in-a-shell">
<title>Building in a Shell (for Mix Projects)</title> <title>Building in a Shell (for Mix Projects)</title>
@ -474,7 +474,7 @@ plt:
analyze: build plt analyze: build plt
$(NIX_SHELL) --run "mix dialyzer --no-compile" $(NIX_SHELL) --run "mix dialyzer --no-compile"
</programlisting> </programlisting>
<para> <para>
Using a <literal>shell.nix</literal> as described (see Using a <literal>shell.nix</literal> as described (see
@ -513,9 +513,9 @@ analyze: build plt
<literal>nixpkgs</literal> repository: <literal>nixpkgs</literal> repository:
</para> </para>
<programlisting> <screen>
$ nix-build -A beamPackages <prompt>$ </prompt>nix-build -A beamPackages
</programlisting> </screen>
<para> <para>
That will attempt to build every package in <literal>beamPackages</literal>. That will attempt to build every package in <literal>beamPackages</literal>.

View file

@ -0,0 +1,71 @@
# Crystal
## Building a Crystal package
This section uses [Mint](https://github.com/mint-lang/mint) as an example for how to build a Crystal package.
If the Crystal project has any dependencies, the first step is to get a `shards.nix` file encoding those. Get a copy of the project and go to its root directory such that its `shard.lock` file is in the current directory, then run `crystal2nix` in it:
```bash
$ git clone https://github.com/mint-lang/mint
$ cd mint
$ git checkout 0.5.0
$ nix-shell -p crystal2nix --run crystal2nix
```
This should have generated a `shards.nix` file.
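The generated `shards.nix` is a plain Nix attribute set pinning each shard from `shard.lock`. Purely as an illustration (the entry below is hypothetical; the exact attribute names and values are whatever your version of `crystal2nix` emits, so treat this as an assumption rather than a specification), an entry may look like:
```nix
{
  # Hypothetical entry for a shard called "admiral"; the hash is a placeholder.
  admiral = {
    owner = "jwaldrip";
    repo = "admiral.cr";
    rev = "v1.6.1";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
```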
Next create a Nix file for your derivation and use `pkgs.crystal.buildCrystalPackage` as follows:
```nix
with import <nixpkgs> {};
crystal.buildCrystalPackage rec {
pname = "mint";
version = "0.5.0";
src = fetchFromGitHub {
owner = "mint-lang";
repo = "mint";
rev = version;
sha256 = "0vxbx38c390rd2ysvbwgh89v2232sh5rbsp3nk9wzb70jybpslvl";
};
# Insert the path to your shards.nix file here
shardsFile = ./shards.nix;
...
}
```
This won't build anything yet, because we haven't told it which files to build. We can specify a mapping from binary names to source files with the `crystalBinaries` attribute. The project's compilation instructions should show this. For Mint, the binary is called "mint", which is compiled from the source file `src/mint.cr`, so we'll specify this as follows:
```nix
crystalBinaries.mint.src = "src/mint.cr";
# ...
```
Additionally you can override the default `crystal build` options (which are currently `--release --progress --no-debug --verbose`) with
```nix
crystalBinaries.mint.options = [ "--release" "--verbose" ];
```
Depending on the project, you might need additional steps to get it to compile successfully. In Mint's case, we need to link against openssl, so in the end the Nix file looks as follows:
```nix
with import <nixpkgs> {};
crystal.buildCrystalPackage rec {
version = "0.5.0";
pname = "mint";
src = fetchFromGitHub {
owner = "mint-lang";
repo = "mint";
rev = version;
sha256 = "0vxbx38c390rd2ysvbwgh89v2232sh5rbsp3nk9wzb70jybpslvl";
};
shardsFile = ./shards.nix;
crystalBinaries.mint.src = "src/mint.cr";
buildInputs = [ openssl_1_0_2 ];
}
```
View file
@ -3,12 +3,91 @@
xml:id="sec-language-go"> xml:id="sec-language-go">
<title>Go</title> <title>Go</title>
<para> <section xml:id="ssec-go-modules">
The function <varname>buildGoPackage</varname> builds standard Go programs. <title>Go modules</title>
</para>
<example xml:id='ex-buildGoPackage'> <para>
<title>buildGoPackage</title> The function <varname>buildGoModule</varname> builds Go programs managed
with Go modules. It builds
<link xlink:href="https://github.com/golang/go/wiki/Modules">Go
modules</link> through a two-phase build:
<itemizedlist>
<listitem>
<para>
An intermediate fetcher derivation. This derivation will be used to fetch
all of the dependencies of the Go module.
</para>
</listitem>
<listitem>
<para>
A final derivation will use the output of the intermediate derivation to
build the binaries and produce the final output.
</para>
</listitem>
</itemizedlist>
</para>
<example xml:id='ex-buildGoModule'>
<title>buildGoModule</title>
<programlisting>
pet = buildGoModule rec {
name = "pet-${version}";
version = "0.3.4";
src = fetchFromGitHub {
owner = "knqyf263";
repo = "pet";
rev = "v${version}";
sha256 = "0m2fzpqxk7hrbxsgqplkg7h2p7gv6s1miymv3gvw0cz039skag0s";
};
modSha256 = "1879j77k96684wi554rkjxydrj8g3hpp0kvxz03sd8dmwr3lh83j"; <co xml:id='ex-buildGoModule-1' />
subPackages = [ "." ]; <co xml:id='ex-buildGoModule-2' />
meta = with lib; {
description = "Simple command-line snippet manager, written in Go";
homepage = https://github.com/knqyf263/pet;
license = licenses.mit;
maintainers = with maintainers; [ kalbasit ];
platforms = platforms.linux ++ platforms.darwin;
};
}
</programlisting>
</example>
<para>
<xref linkend='ex-buildGoModule'/> is an example expression using
buildGoModule; the following arguments are of special significance to the
function:
<calloutlist>
<callout arearefs='ex-buildGoModule-1'>
<para>
<varname>modSha256</varname> is the hash of the output of the
intermediate fetcher derivation (one way to obtain it is sketched below).
</para>
</callout>
<callout arearefs='ex-buildGoModule-2'>
<para>
<varname>subPackages</varname> limits the builder from building child
packages that have not been listed. If <varname>subPackages</varname> is
not specified, all child packages will be built.
</para>
</callout>
</calloutlist>
</para>
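  <para>
   If the correct <varname>modSha256</varname> is not known ahead of time, one
   common workflow (a suggestion only, not something required by
   <varname>buildGoModule</varname>) is to build once with a placeholder hash
   and copy the real value from the resulting hash-mismatch error:
<programlisting>
pet = buildGoModule rec {
  # ...
  # Placeholder hash; the first build fails and reports the value that
  # modSha256 should be set to. lib.fakeSha256 is assumed to be in scope here.
  modSha256 = lib.fakeSha256;
}
</programlisting>
  </para>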
</section>
<section xml:id="ssec-go-legacy">
<title>Go legacy</title>
<para>
The function <varname>buildGoPackage</varname> builds legacy Go programs
that do not use Go modules.
</para>
<example xml:id='ex-buildGoPackage'>
<title>buildGoPackage</title>
<programlisting> <programlisting>
deis = buildGoPackage rec { deis = buildGoPackage rec {
name = "deis-${version}"; name = "deis-${version}";
@ -29,56 +108,56 @@ deis = buildGoPackage rec {
buildFlags = "--tags release"; <co xml:id='ex-buildGoPackage-4' /> buildFlags = "--tags release"; <co xml:id='ex-buildGoPackage-4' />
} }
</programlisting> </programlisting>
</example> </example>
<para> <para>
<xref linkend='ex-buildGoPackage'/> is an example expression using <xref linkend='ex-buildGoPackage'/> is an example expression using
buildGoPackage, the following arguments are of special significance to the buildGoPackage, the following arguments are of special significance to the
function: function:
<calloutlist> <calloutlist>
<callout arearefs='ex-buildGoPackage-1'> <callout arearefs='ex-buildGoPackage-1'>
<para> <para>
<varname>goPackagePath</varname> specifies the package's canonical Go <varname>goPackagePath</varname> specifies the package's canonical Go
import path. import path.
</para> </para>
</callout> </callout>
<callout arearefs='ex-buildGoPackage-2'> <callout arearefs='ex-buildGoPackage-2'>
<para> <para>
<varname>subPackages</varname> limits the builder from building child <varname>subPackages</varname> limits the builder from building child
packages that have not been listed. If <varname>subPackages</varname> is packages that have not been listed. If <varname>subPackages</varname> is
not specified, all child packages will be built. not specified, all child packages will be built.
</para> </para>
<para> <para>
In this example only <literal>github.com/deis/deis/client</literal> will In this example only <literal>github.com/deis/deis/client</literal> will
be built. be built.
</para> </para>
</callout> </callout>
<callout arearefs='ex-buildGoPackage-3'> <callout arearefs='ex-buildGoPackage-3'>
<para> <para>
<varname>goDeps</varname> is where the Go dependencies of a Go program are <varname>goDeps</varname> is where the Go dependencies of a Go program
listed as a list of package source identified by Go import path. It could are listed as a list of package source identified by Go import path. It
be imported as a separate <varname>deps.nix</varname> file for could be imported as a separate <varname>deps.nix</varname> file for
readability. The dependency data structure is described below. readability. The dependency data structure is described below.
</para> </para>
</callout> </callout>
<callout arearefs='ex-buildGoPackage-4'> <callout arearefs='ex-buildGoPackage-4'>
<para> <para>
<varname>buildFlags</varname> is a list of flags passed to the go build <varname>buildFlags</varname> is a list of flags passed to the go build
command. command.
</para> </para>
</callout> </callout>
</calloutlist> </calloutlist>
</para> </para>
<para> <para>
The <varname>goDeps</varname> attribute can be imported from a separate The <varname>goDeps</varname> attribute can be imported from a separate
<varname>nix</varname> file that defines which Go libraries are needed and <varname>nix</varname> file that defines which Go libraries are needed and
should be included in <varname>GOPATH</varname> for should be included in <varname>GOPATH</varname> for
<varname>buildPhase</varname>. <varname>buildPhase</varname>.
</para> </para>
<example xml:id='ex-goDeps'> <example xml:id='ex-goDeps'>
<title>deps.nix</title> <title>deps.nix</title>
<programlisting> <programlisting>
[ <co xml:id='ex-goDeps-1' /> [ <co xml:id='ex-goDeps-1' />
{ {
@ -101,60 +180,62 @@ deis = buildGoPackage rec {
} }
] ]
</programlisting> </programlisting>
</example> </example>
<para> <para>
<calloutlist> <calloutlist>
<callout arearefs='ex-goDeps-1'> <callout arearefs='ex-goDeps-1'>
<para> <para>
<varname>goDeps</varname> is a list of Go dependencies. <varname>goDeps</varname> is a list of Go dependencies.
</para> </para>
</callout> </callout>
<callout arearefs='ex-goDeps-2'> <callout arearefs='ex-goDeps-2'>
<para> <para>
<varname>goPackagePath</varname> specifies Go package import path. <varname>goPackagePath</varname> specifies Go package import path.
</para> </para>
</callout> </callout>
<callout arearefs='ex-goDeps-3'> <callout arearefs='ex-goDeps-3'>
<para> <para>
<varname>fetch type</varname> that needs to be used to get package source. <varname>fetch type</varname> that needs to be used to get package
If <varname>git</varname> is used there should be <varname>url</varname>, source. If <varname>git</varname> is used there should be
<varname>rev</varname> and <varname>sha256</varname> defined next to it. <varname>url</varname>, <varname>rev</varname> and
</para> <varname>sha256</varname> defined next to it.
</callout> </para>
</calloutlist> </callout>
</para> </calloutlist>
</para>
<para> <para>
To extract dependency information from a Go package in an automated way use To extract dependency information from a Go package in an automated way use
<link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>. It can <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>. It can
produce a complete derivation and <varname>goDeps</varname> file for Go produce a complete derivation and <varname>goDeps</varname> file for Go
programs. programs.
</para> </para>
<para> <para>
<varname>buildGoPackage</varname> produces <varname>buildGoPackage</varname> produces
<xref linkend='chap-multiple-output' xrefstyle="select: title" /> where <xref linkend='chap-multiple-output' xrefstyle="select: title" /> where
<varname>bin</varname> includes program binaries. You can test build a Go <varname>bin</varname> includes program binaries. You can test build a Go
binary as follows: binary as follows:
<screen> <screen>
$ nix-build -A deis.bin <prompt>$ </prompt>nix-build -A deis.bin
</screen> </screen>
or build all outputs with: or build all outputs with:
<screen> <screen>
$ nix-build -A deis.all <prompt>$ </prompt>nix-build -A deis.all
</screen> </screen>
<varname>bin</varname> output will be installed by default with <varname>bin</varname> output will be installed by default with
<varname>nix-env -i</varname> or <varname>systemPackages</varname>. <varname>nix-env -i</varname> or <varname>systemPackages</varname>.
</para> </para>
<para> <para>
You may use Go packages installed into the active Nix profiles by adding the You may use Go packages installed into the active Nix profiles by adding the
following to your ~/.bashrc: following to your ~/.bashrc:
<screen> <screen>
for p in $NIX_PROFILES; do for p in $NIX_PROFILES; do
GOPATH="$p/share/go:$GOPATH" GOPATH="$p/share/go:$GOPATH"
done done
</screen> </screen>
</para> </para>
</section>
</section> </section>

View file

@ -55,7 +55,7 @@ package `haskell-pandoc`, for example, installs both a library and an
application. You can install and use Haskell executables just like any other application. You can install and use Haskell executables just like any other
program in Nixpkgs, but using Haskell libraries for development is a bit program in Nixpkgs, but using Haskell libraries for development is a bit
trickier and we'll address that subject in great detail in section [How to trickier and we'll address that subject in great detail in section [How to
create a development environment]. create a development environment](#how-to-create-a-development-environment).
Attribute paths are deterministic inside of Nixpkgs, but the path necessary to Attribute paths are deterministic inside of Nixpkgs, but the path necessary to
reach Nixpkgs varies from system to system. We dodged that problem by giving reach Nixpkgs varies from system to system. We dodged that problem by giving
@ -127,7 +127,7 @@ Also, the attributes `haskell.compiler.ghcXYC` and
A simple development environment consists of a Haskell compiler and one or both A simple development environment consists of a Haskell compiler and one or both
of the tools `cabal-install` and `stack`. We saw in section of the tools `cabal-install` and `stack`. We saw in section
[How to install Haskell packages] how you can install those programs into your [How to install Haskell packages](#how-to-install-haskell-packages) how you can install those programs into your
user profile: user profile:
```shell ```shell
nix-env -f "<nixpkgs>" -iA haskellPackages.ghc haskellPackages.cabal-install nix-env -f "<nixpkgs>" -iA haskellPackages.ghc haskellPackages.cabal-install
@ -162,7 +162,7 @@ nix-shell -p haskell.compiler.ghc784
to bring GHC 7.8.4 into `$PATH`. Alternatively, you can use Stack instead of to bring GHC 7.8.4 into `$PATH`. Alternatively, you can use Stack instead of
`nix-shell` directly to select compiler versions and other build tools `nix-shell` directly to select compiler versions and other build tools
per-project. It uses `nix-shell` under the hood when Nix support is turned on. per-project. It uses `nix-shell` under the hood when Nix support is turned on.
See [How to build a Haskell project using Stack]. See [How to build a Haskell project using Stack](#how-to-build-a-haskell-project-using-stack).
If you're using `cabal-install`, re-running `cabal configure` inside the spawned If you're using `cabal-install`, re-running `cabal configure` inside the spawned
shell switches your build to use that compiler instead. If you're working on shell switches your build to use that compiler instead. If you're working on
@ -366,7 +366,7 @@ automatically select the right version of GHC and other build tools to build,
test and execute apps in an existing project downloaded from somewhere on the test and execute apps in an existing project downloaded from somewhere on the
Internet. Pass the `--nix` flag to any `stack` command to do so, e.g. Internet. Pass the `--nix` flag to any `stack` command to do so, e.g.
```shell ```shell
git clone --recursive http://github.com/yesodweb/wai git clone --recursive https://github.com/yesodweb/wai
cd wai cd wai
stack --nix build stack --nix build
``` ```
@ -953,7 +953,7 @@ is essentially a "free software" license (BSD3), according to
paragraph 2 of the LGPL, GHC must be distributed under the terms of the LGPL! paragraph 2 of the LGPL, GHC must be distributed under the terms of the LGPL!
To work around these problems GHC can be built with a slower but LGPL-free To work around these problems GHC can be built with a slower but LGPL-free
alternative implemention for Integer called alternative implementation for Integer called
[integer-simple](http://hackage.haskell.org/package/integer-simple). [integer-simple](http://hackage.haskell.org/package/integer-simple).
To get a GHC compiler build with `integer-simple` instead of `integer-gmp` use To get a GHC compiler build with `integer-simple` instead of `integer-gmp` use
View file
@ -11,10 +11,21 @@ $ # On non-NixOS
$ nix-env -i nixpkgs.idris $ nix-env -i nixpkgs.idris
``` ```
This however only provides the `prelude` and `base` libraries. To install additional libraries: This however only provides the `prelude` and `base` libraries. To install idris with additional libraries, you can use the `idrisPackages.with-packages` function, e.g. in an overlay in `~/.config/nixpkgs/overlays/my-idris.nix`:
```nix
self: super: {
myIdris = with self.idrisPackages; with-packages [ contrib pruviloj ];
}
```
And then:
``` ```
$ nix-env -iE 'pkgs: pkgs.idrisPackages.with-packages (with pkgs.idrisPackages; [ contrib pruviloj ])' $ # On NixOS
$ nix-env -iA nixos.myIdris
$ # On non-NixOS
$ nix-env -iA nixpkgs.myIdris
``` ```
To see all available Idris packages: To see all available Idris packages:
@ -113,3 +124,21 @@ in another file (say `default.nix`) to be able to build it with
``` ```
$ nix-build -A yaml $ nix-build -A yaml
``` ```
## Passing options to `idris` commands
The `build-idris-package` function also provides optional input values to set additional options for the `idris` commands it uses.
Specifically, you can set `idrisBuildOptions`, `idrisTestOptions`, `idrisInstallOptions` and `idrisDocOptions` to provide additional options to the `idris` command respectively when building, testing, installing and generating docs for your package.
For example you could set
```
build-idris-package {
idrisBuildOptions = [ "--log" "1" "--verbose" ];
...
}
```
to require verbose output during the `idris` build phase.
View file
@ -32,4 +32,5 @@
<xi:include href="titanium.section.xml" /> <xi:include href="titanium.section.xml" />
<xi:include href="vim.section.xml" /> <xi:include href="vim.section.xml" />
<xi:include href="emscripten.section.xml" /> <xi:include href="emscripten.section.xml" />
<xi:include href="crystal.section.xml" />
</chapter> </chapter>
View file
@ -10,7 +10,7 @@ stdenv.mkDerivation {
name = "..."; name = "...";
src = fetchurl { ... }; src = fetchurl { ... };
buildInputs = [ jdk ant ]; nativeBuildInputs = [ jdk ant ];
buildPhase = "ant"; buildPhase = "ant";
} }
@ -30,7 +30,8 @@ stdenv.mkDerivation {
<filename>foo.jar</filename> in its <filename>share/java</filename> <filename>foo.jar</filename> in its <filename>share/java</filename>
directory, and another package declares the attribute directory, and another package declares the attribute
<programlisting> <programlisting>
buildInputs = [ jdk libfoo ]; buildInputs = [ libfoo ];
nativeBuildInputs = [ jdk ];
</programlisting> </programlisting>
then <envar>CLASSPATH</envar> will be set to then <envar>CLASSPATH</envar> will be set to
<filename>/nix/store/...-libfoo/share/java/foo.jar</filename>. <filename>/nix/store/...-libfoo/share/java/foo.jar</filename>.
@ -46,7 +47,7 @@ buildInputs = [ jdk libfoo ];
script to run it using the OpenJRE. You can use script to run it using the OpenJRE. You can use
<literal>makeWrapper</literal> for this: <literal>makeWrapper</literal> for this:
<programlisting> <programlisting>
buildInputs = [ makeWrapper ]; nativeBuildInputs = [ makeWrapper ];
installPhase = installPhase =
'' ''
@ -68,7 +69,7 @@ installPhase =
can be done in a generic fashion with the <literal>--set</literal> argument can be done in a generic fashion with the <literal>--set</literal> argument
of <literal>makeWrapper</literal>: of <literal>makeWrapper</literal>:
<programlisting> <programlisting>
--set JAVA_HOME ${jdk.home} --set JAVA_HOME ${jdk.home}
</programlisting> </programlisting>
</para> </para>
@ -76,7 +77,7 @@ installPhase =
It is possible to use a different Java compiler than <command>javac</command> It is possible to use a different Java compiler than <command>javac</command>
from the OpenJDK. For instance, to use the GNU Java Compiler: from the OpenJDK. For instance, to use the GNU Java Compiler:
<programlisting> <programlisting>
buildInputs = [ gcj ant ]; nativeBuildInputs = [ gcj ant ];
</programlisting> </programlisting>
Here, Ant will automatically use <command>gij</command> (the GNU Java Here, Ant will automatically use <command>gij</command> (the GNU Java
Runtime) instead of the OpenJRE. Runtime) instead of the OpenJRE.
View file
@ -29,7 +29,7 @@ fileSystem = buildLuaPackage {
maintainers = with maintainers; [ flosse ]; maintainers = with maintainers; [ flosse ];
}; };
}; };
</programlisting> </programlisting>
</para> </para>
<para> <para>
View file
@ -4,39 +4,38 @@
<title>OCaml</title> <title>OCaml</title>
<para> <para>
OCaml libraries should be installed in OCaml libraries should be installed in
<literal>$(out)/lib/ocaml/${ocaml.version}/site-lib/</literal>. Such <literal>$(out)/lib/ocaml/${ocaml.version}/site-lib/</literal>. Such
directories are automatically added to the <literal>$OCAMLPATH</literal> directories are automatically added to the <literal>$OCAMLPATH</literal>
environment variable when building another package that depends on them environment variable when building another package that depends on them or
or when opening a <literal>nix-shell</literal>. when opening a <literal>nix-shell</literal>.
</para> </para>
<para> <para>
Given that most of the OCaml ecosystem is now built with dune, Given that most of the OCaml ecosystem is now built with dune, nixpkgs
nixpkgs includes a convenience build support function called includes a convenience build support function called
<literal>buildDunePackage</literal> that will build an OCaml package <literal>buildDunePackage</literal> that will build an OCaml package using
using dune, OCaml and findlib and any additional dependencies provided dune, OCaml and findlib and any additional dependencies provided as
as <literal>buildInputs</literal> or <literal>propagatedBuildInputs</literal>. <literal>buildInputs</literal> or <literal>propagatedBuildInputs</literal>.
</para> </para>
<para> <para>
Here is a simple package example. It defines an (optional) attribute Here is a simple package example. It defines an (optional) attribute
<literal>minimumOCamlVersion</literal> that will be used to throw a <literal>minimumOCamlVersion</literal> that will be used to throw a
descriptive evaluation error if building with an older OCaml is attempted. descriptive evaluation error if building with an older OCaml is attempted. It
It uses the <literal>fetchFromGitHub</literal> fetcher to get its source. uses the <literal>fetchFromGitHub</literal> fetcher to get its source. It
It sets the <literal>doCheck</literal> (optional) attribute to sets the <literal>doCheck</literal> (optional) attribute to
<literal>true</literal> which means that tests will be run with <literal>true</literal> which means that tests will be run with <literal>dune
<literal>dune runtest -p angstrom</literal> after the build runtest -p angstrom</literal> after the build (<literal>dune build -p
(<literal>dune build -p angstrom</literal>) is complete. angstrom</literal>) is complete. It uses <literal>alcotest</literal> as a
It uses <literal>alcotest</literal> as a build input (because it is needed build input (because it is needed to run the tests) and
to run the tests) and <literal>bigstringaf</literal> and <literal>bigstringaf</literal> and <literal>result</literal> as propagated
<literal>result</literal> as propagated build inputs (thus they will also build inputs (thus they will also be available to libraries depending on this
be available to libraries depending on this library). library). The library will be installed using the
The library will be installed using the <literal>angstrom.install</literal> <literal>angstrom.install</literal> file that dune generates.
file that dune generates.
</para> </para>
<programlisting> <programlisting>
{ stdenv, fetchFromGitHub, buildDunePackage, alcotest, result, bigstringaf }: { stdenv, fetchFromGitHub, buildDunePackage, alcotest, result, bigstringaf }:
buildDunePackage rec { buildDunePackage rec {
@ -63,17 +62,17 @@ buildDunePackage rec {
maintainers = with stdenv.lib.maintainers; [ sternenseemann ]; maintainers = with stdenv.lib.maintainers; [ sternenseemann ];
}; };
} }
</programlisting> </programlisting>
<para> <para>
Here is a second example, this time using a source archive generated with Here is a second example, this time using a source archive generated with
<literal>dune-release</literal>. It is a good idea to use this archive when <literal>dune-release</literal>. It is a good idea to use this archive when
it is available as it will usually contain substituted variables such as a it is available as it will usually contain substituted variables such as a
<literal>%%VERSION%%</literal> field. This library does not depend <literal>%%VERSION%%</literal> field. This library does not depend on any
on any other OCaml library and no tests are run after building it. other OCaml library and no tests are run after building it.
</para> </para>
<programlisting> <programlisting>
{ stdenv, fetchurl, buildDunePackage }: { stdenv, fetchurl, buildDunePackage }:
buildDunePackage rec { buildDunePackage rec {
@ -94,6 +93,5 @@ buildDunePackage rec {
maintainers = [ maintainers.eqyiel ]; maintainers = [ maintainers.eqyiel ];
}; };
} }
</programlisting> </programlisting>
</section> </section>
View file
@ -47,13 +47,13 @@ foo = import ../path/to/foo.nix {
in <filename>all-packages.nix</filename>. You can test building a Perl in <filename>all-packages.nix</filename>. You can test building a Perl
package as follows: package as follows:
<screen> <screen>
$ nix-build -A perlPackages.ClassC3 <prompt>$ </prompt>nix-build -A perlPackages.ClassC3
</screen> </screen>
<varname>buildPerlPackage</varname> adds <literal>perl-</literal> to the <varname>buildPerlPackage</varname> adds <literal>perl-</literal> to the
start of the name attribute, so the package above is actually called start of the name attribute, so the package above is actually called
<literal>perl-Class-C3-0.21</literal>. So to install it, you can say: <literal>perl-Class-C3-0.21</literal>. So to install it, you can say:
<screen> <screen>
$ nix-env -i perl-Class-C3 <prompt>$ </prompt>nix-env -i perl-Class-C3
</screen> </screen>
(Of course you can also install using the attribute name: <literal>nix-env -i (Of course you can also install using the attribute name: <literal>nix-env -i
-A perlPackages.ClassC3</literal>.) -A perlPackages.ClassC3</literal>.)
@ -75,7 +75,8 @@ $ nix-env -i perl-Class-C3
It adds the contents of the <envar>PERL5LIB</envar> environment variable It adds the contents of the <envar>PERL5LIB</envar> environment variable
to <literal>#! .../bin/perl</literal> line of Perl scripts as to <literal>#! .../bin/perl</literal> line of Perl scripts as
<literal>-I<replaceable>dir</replaceable></literal> flags. This ensures <literal>-I<replaceable>dir</replaceable></literal> flags. This ensures
that a script can find its dependencies. that a script can find its dependencies. (This can cause this shebang line
to become too long for Darwin to handle; see the note below.)
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
@ -137,6 +138,36 @@ ClassC3Componentised = buildPerlPackage rec {
</programlisting> </programlisting>
</para> </para>
<para>
On Darwin, if a script has too many
<literal>-I<replaceable>dir</replaceable></literal> flags in its first line
(its “shebang line”), it will not run. This can be worked around by calling
the <literal>shortenPerlShebang</literal> function from the
<literal>postInstall</literal> phase:
<programlisting>
{ stdenv, buildPerlPackage, fetchurl, shortenPerlShebang }:
ImageExifTool = buildPerlPackage {
pname = "Image-ExifTool";
version = "11.50";
src = fetchurl {
url = "https://www.sno.phy.queensu.ca/~phil/exiftool/Image-ExifTool-11.50.tar.gz";
sha256 = "0d8v48y94z8maxkmw1rv7v9m0jg2dc8xbp581njb6yhr7abwqdv3";
};
buildInputs = stdenv.lib.optional stdenv.isDarwin shortenPerlShebang;
postInstall = stdenv.lib.optionalString stdenv.isDarwin ''
shortenPerlShebang $out/bin/exiftool
'';
};
</programlisting>
This will remove the <literal>-I</literal> flags from the shebang line,
rewrite them in the <literal>use lib</literal> form, and put them on the next
line instead. This function can be given any number of Perl scripts as
arguments; it will modify them in-place.
</para>
<section xml:id="ssec-generation-from-CPAN"> <section xml:id="ssec-generation-from-CPAN">
<title>Generation from CPAN</title> <title>Generation from CPAN</title>
@ -148,7 +179,7 @@ ClassC3Componentised = buildPerlPackage rec {
</para> </para>
<screen> <screen>
$ nix-env -i nix-generate-from-cpan <prompt>$ </prompt>nix-env -i nix-generate-from-cpan
</screen> </screen>
<para> <para>
@ -156,7 +187,7 @@ $ nix-env -i nix-generate-from-cpan
unpacks the corresponding package, and prints a Nix expression on standard unpacks the corresponding package, and prints a Nix expression on standard
output. For example: output. For example:
<screen> <screen>
$ nix-generate-from-cpan XML::Simple <prompt>$ </prompt>nix-generate-from-cpan XML::Simple
XMLSimple = buildPerlPackage rec { XMLSimple = buildPerlPackage rec {
name = "XML-Simple-2.22"; name = "XML-Simple-2.22";
src = fetchurl { src = fetchurl {
View file
@ -188,23 +188,22 @@ building Python libraries is `buildPythonPackage`. Let's see how we can build th
```nix ```nix
{ lib, buildPythonPackage, fetchPypi }: { lib, buildPythonPackage, fetchPypi }:
toolz = buildPythonPackage rec { buildPythonPackage rec {
pname = "toolz"; pname = "toolz";
version = "0.7.4"; version = "0.7.4";
src = fetchPypi { src = fetchPypi {
inherit pname version; inherit pname version;
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd"; sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
}; };
doCheck = false; doCheck = false;
meta = with lib; { meta = with lib; {
homepage = https://github.com/pytoolz/toolz; homepage = https://github.com/pytoolz/toolz;
description = "List processing tools and functional utilities"; description = "List processing tools and functional utilities";
license = licenses.bsd3; license = licenses.bsd3;
maintainers = with maintainers; [ fridh ]; maintainers = with maintainers; [ fridh ];
};
}; };
} }
``` ```
@ -279,32 +278,31 @@ The following example shows which arguments are given to `buildPythonPackage` in
order to build [`datashape`](https://github.com/blaze/datashape). order to build [`datashape`](https://github.com/blaze/datashape).
```nix ```nix
{ # ... { lib, buildPythonPackage, fetchPypi, numpy, multipledispatch, dateutil, pytest }:
datashape = buildPythonPackage rec { buildPythonPackage rec {
pname = "datashape"; pname = "datashape";
version = "0.4.7"; version = "0.4.7";
src = fetchPypi { src = fetchPypi {
inherit pname version; inherit pname version;
sha256 = "14b2ef766d4c9652ab813182e866f493475e65e558bed0822e38bf07bba1a278"; sha256 = "14b2ef766d4c9652ab813182e866f493475e65e558bed0822e38bf07bba1a278";
}; };
checkInputs = with self; [ pytest ]; checkInputs = [ pytest ];
propagatedBuildInputs = with self; [ numpy multipledispatch dateutil ]; propagatedBuildInputs = [ numpy multipledispatch dateutil ];
meta = with lib; { meta = with lib; {
homepage = https://github.com/ContinuumIO/datashape; homepage = https://github.com/ContinuumIO/datashape;
description = "A data description language"; description = "A data description language";
license = licenses.bsd2; license = licenses.bsd2;
maintainers = with maintainers; [ fridh ]; maintainers = with maintainers; [ fridh ];
};
}; };
} }
``` ```
We can see several runtime dependencies, `numpy`, `multipledispatch`, and We can see several runtime dependencies, `numpy`, `multipledispatch`, and
`dateutil`. Furthermore, we have one `buildInput`, i.e. `pytest`. `pytest` is a `dateutil`. Furthermore, we have one `checkInputs` entry, i.e. `pytest`. `pytest` is a
test runner and is only used during the `checkPhase` and is therefore not added test runner and is only used during the `checkPhase` and is therefore not added
to `propagatedBuildInputs`. to `propagatedBuildInputs`.
@ -314,25 +312,24 @@ Python bindings to `libxml2` and `libxslt`. These libraries are only required
when building the bindings and are therefore added as `buildInputs`. when building the bindings and are therefore added as `buildInputs`.
```nix ```nix
{ # ... { lib, pkgs, buildPythonPackage, fetchPypi }:
lxml = buildPythonPackage rec { buildPythonPackage rec {
pname = "lxml"; pname = "lxml";
version = "3.4.4"; version = "3.4.4";
src = fetchPypi { src = fetchPypi {
inherit pname version; inherit pname version;
sha256 = "16a0fa97hym9ysdk3rmqz32xdjqmy4w34ld3rm3jf5viqjx65lxk"; sha256 = "16a0fa97hym9ysdk3rmqz32xdjqmy4w34ld3rm3jf5viqjx65lxk";
}; };
buildInputs = with self; [ pkgs.libxml2 pkgs.libxslt ]; buildInputs = [ pkgs.libxml2 pkgs.libxslt ];
meta = with lib; { meta = with lib; {
description = "Pythonic binding for the libxml2 and libxslt libraries"; description = "Pythonic binding for the libxml2 and libxslt libraries";
homepage = https://lxml.de; homepage = https://lxml.de;
license = licenses.bsd3; license = licenses.bsd3;
maintainers = with maintainers; [ sjourdois ]; maintainers = with maintainers; [ sjourdois ];
};
}; };
} }
``` ```
@ -348,35 +345,34 @@ find each of them in a different folder, and therefore we have to set `LDFLAGS`
and `CFLAGS`. and `CFLAGS`.
```nix ```nix
{ # ... { lib, pkgs, buildPythonPackage, fetchPypi, numpy, scipy }:
pyfftw = buildPythonPackage rec { buildPythonPackage rec {
pname = "pyFFTW"; pname = "pyFFTW";
version = "0.9.2"; version = "0.9.2";
src = fetchPypi { src = fetchPypi {
inherit pname version; inherit pname version;
sha256 = "f6bbb6afa93085409ab24885a1a3cdb8909f095a142f4d49e346f2bd1b789074"; sha256 = "f6bbb6afa93085409ab24885a1a3cdb8909f095a142f4d49e346f2bd1b789074";
}; };
buildInputs = [ pkgs.fftw pkgs.fftwFloat pkgs.fftwLongDouble]; buildInputs = [ pkgs.fftw pkgs.fftwFloat pkgs.fftwLongDouble];
propagatedBuildInputs = with self; [ numpy scipy ]; propagatedBuildInputs = [ numpy scipy ];
# Tests cannot import pyfftw. pyfftw works fine though. # Tests cannot import pyfftw. pyfftw works fine though.
doCheck = false; doCheck = false;
preConfigure = '' preConfigure = ''
export LDFLAGS="-L${pkgs.fftw.dev}/lib -L${pkgs.fftwFloat.out}/lib -L${pkgs.fftwLongDouble.out}/lib" export LDFLAGS="-L${pkgs.fftw.dev}/lib -L${pkgs.fftwFloat.out}/lib -L${pkgs.fftwLongDouble.out}/lib"
export CFLAGS="-I${pkgs.fftw.dev}/include -I${pkgs.fftwFloat.dev}/include -I${pkgs.fftwLongDouble.dev}/include" export CFLAGS="-I${pkgs.fftw.dev}/include -I${pkgs.fftwFloat.dev}/include -I${pkgs.fftwLongDouble.dev}/include"
''; '';
meta = with lib; { meta = with lib; {
description = "A pythonic wrapper around FFTW, the FFT library, presenting a unified interface for all the supported transforms"; description = "A pythonic wrapper around FFTW, the FFT library, presenting a unified interface for all the supported transforms";
homepage = http://hgomersall.github.com/pyFFTW; homepage = http://hgomersall.github.com/pyFFTW;
license = with licenses; [ bsd2 bsd3 ]; license = with licenses; [ bsd2 bsd3 ];
maintainers = with maintainers; [ fridh ]; maintainers = with maintainers; [ fridh ];
};
}; };
} }
``` ```
@ -404,7 +400,7 @@ Indeed, we can just add any package we like to have in our environment to `propa
```nix ```nix
with import <nixpkgs> {}; with import <nixpkgs> {};
with pkgs.python35Packages; with python35Packages;
buildPythonPackage rec { buildPythonPackage rec {
name = "mypackage"; name = "mypackage";
@ -437,7 +433,7 @@ Let's split the package definition from the environment definition.
We first create a function that builds `toolz` in `~/path/to/toolz/release.nix` We first create a function that builds `toolz` in `~/path/to/toolz/release.nix`
```nix ```nix
{ lib, pkgs, buildPythonPackage }: { lib, buildPythonPackage }:
buildPythonPackage rec { buildPythonPackage rec {
pname = "toolz"; pname = "toolz";
@ -449,7 +445,7 @@ buildPythonPackage rec {
}; };
meta = with lib; { meta = with lib; {
homepage = "http://github.com/pytoolz/toolz/"; homepage = "https://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities"; description = "List processing tools and functional utilities";
license = licenses.bsd3; license = licenses.bsd3;
maintainers = with maintainers; [ fridh ]; maintainers = with maintainers; [ fridh ];
@ -457,18 +453,17 @@ buildPythonPackage rec {
} }
``` ```
It takes two arguments, `pkgs` and `buildPythonPackage`. It takes an argument `buildPythonPackage`.
We now call this function using `callPackage` in the definition of our environment We now call this function using `callPackage` in the definition of our environment
```nix ```nix
with import <nixpkgs> {}; with import <nixpkgs> {};
( let ( let
toolz = pkgs.callPackage /path/to/toolz/release.nix { toolz = callPackage /path/to/toolz/release.nix {
pkgs = pkgs; buildPythonPackage = python35Packages.buildPythonPackage;
buildPythonPackage = pkgs.python35Packages.buildPythonPackage;
}; };
in pkgs.python35.withPackages (ps: [ ps.numpy toolz ]) in python35.withPackages (ps: [ ps.numpy toolz ])
).env ).env
``` ```
@ -515,7 +510,7 @@ Each interpreter has the following attributes:
### Building packages and applications ### Building packages and applications
Python libraries and applications that use `setuptools` or Python libraries and applications that use `setuptools` or
`distutils` are typically build with respectively the `buildPythonPackage` and `distutils` are typically built with respectively the `buildPythonPackage` and
`buildPythonApplication` functions. These two functions also support installing a `wheel`. `buildPythonApplication` functions. These two functions also support installing a `wheel`.
All Python packages reside in `pkgs/top-level/python-packages.nix` and all All Python packages reside in `pkgs/top-level/python-packages.nix` and all
@ -566,7 +561,7 @@ buildPythonPackage rec {
''; '';
checkInputs = [ hypothesis ]; checkInputs = [ hypothesis ];
buildInputs = [ setuptools_scm ]; nativeBuildInputs = [ setuptools_scm ];
propagatedBuildInputs = [ attrs py setuptools six pluggy ]; propagatedBuildInputs = [ attrs py setuptools six pluggy ];
meta = with lib; { meta = with lib; {
@ -586,11 +581,6 @@ The `buildPythonPackage` mainly does four things:
environment variable and add dependent libraries to script's `sys.path`. environment variable and add dependent libraries to script's `sys.path`.
* In the `installCheck` phase, `${python.interpreter} setup.py test` is run. * In the `installCheck` phase, `${python.interpreter} setup.py test` is run.
As in Perl, dependencies on other Python packages can be specified in the
`buildInputs` and `propagatedBuildInputs` attributes. If something is
exclusively a build-time dependency, use `buildInputs`; if it is (also) a runtime
dependency, use `propagatedBuildInputs`.
By default tests are run because `doCheck = true`. Test dependencies, like By default tests are run because `doCheck = true`. Test dependencies, like
e.g. the test runner, should be added to `checkInputs`. e.g. the test runner, should be added to `checkInputs`.
@ -602,19 +592,28 @@ as the interpreter unless overridden otherwise.
All parameters from the `stdenv.mkDerivation` function are still supported. The following are specific to `buildPythonPackage`: All parameters from the `stdenv.mkDerivation` function are still supported. The following are specific to `buildPythonPackage`:
* `catchConflicts ? true`: If `true`, abort package build if a package name appears more than once in dependency tree. Default is `true`. * `catchConflicts ? true`: If `true`, abort package build if a package name appears more than once in dependency tree. Default is `true`.
* `checkInputs ? []`: Dependencies needed for running the `checkPhase`. These are added to `buildInputs` when `doCheck = true`.
* `disabled ? false`: If `true`, package is not built for the particular Python interpreter version. * `disabled ? false`: If `true`, package is not built for the particular Python interpreter version.
* `dontWrapPythonPrograms ? false`: Skip wrapping of python programs. * `dontWrapPythonPrograms ? false`: Skip wrapping of python programs.
* `installFlags ? []`: A list of strings. Arguments to be passed to `pip install`. To pass options to `python setup.py install`, use `--install-option`. E.g., `installFlags=["--install-option='--cpp_implementation'"]. * `permitUserSite ? false`: Skip setting the `PYTHONNOUSERSITE` environment variable in wrapped programs.
* `format ? "setuptools"`: Format of the source. Valid options are `"setuptools"`, `"flit"`, `"wheel"`, and `"other"`. `"setuptools"` is for when the source has a `setup.py` and `setuptools` is used to build a wheel, `flit`, in case `flit` should be used to build a wheel, and `wheel` in case a wheel is provided. Use `other` when a custom `buildPhase` and/or `installPhase` is needed. * `installFlags ? []`: A list of strings. Arguments to be passed to `pip install`. To pass options to `python setup.py install`, use `--install-option`. E.g., `installFlags=["--install-option='--cpp_implementation'"]`.
* `format ? "setuptools"`: Format of the source. Valid options are `"setuptools"`, `"pyproject"`, `"flit"`, `"wheel"`, and `"other"`. `"setuptools"` is for when the source has a `setup.py` and `setuptools` is used to build a wheel, `flit`, in case `flit` should be used to build a wheel, and `wheel` in case a wheel is provided. Use `other` when a custom `buildPhase` and/or `installPhase` is needed.
* `makeWrapperArgs ? []`: A list of strings. Arguments to be passed to `makeWrapper`, which wraps generated binaries. By default, the arguments to `makeWrapper` set `PATH` and `PYTHONPATH` environment variables before calling the binary. Additional arguments here can allow a developer to set environment variables which will be available when the binary is run. For example, `makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]`. * `makeWrapperArgs ? []`: A list of strings. Arguments to be passed to `makeWrapper`, which wraps generated binaries. By default, the arguments to `makeWrapper` set `PATH` and `PYTHONPATH` environment variables before calling the binary. Additional arguments here can allow a developer to set environment variables which will be available when the binary is run. For example, `makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]`.
* `namePrefix`: Prepends text to `${name}` parameter. In case of libraries, this defaults to `"python3.5-"` for Python 3.5, etc., and in case of applications to `""`. * `namePrefix`: Prepends text to `${name}` parameter. In case of libraries, this defaults to `"python3.5-"` for Python 3.5, etc., and in case of applications to `""`.
* `pythonPath ? []`: List of packages to be added into `$PYTHONPATH`. Packages in `pythonPath` are not propagated (contrary to `propagatedBuildInputs`). * `pythonPath ? []`: List of packages to be added into `$PYTHONPATH`. Packages in `pythonPath` are not propagated (contrary to `propagatedBuildInputs`).
* `preShellHook`: Hook to execute commands before `shellHook`. * `preShellHook`: Hook to execute commands before `shellHook`.
* `postShellHook`: Hook to execute commands after `shellHook`. * `postShellHook`: Hook to execute commands after `shellHook`.
* `removeBinByteCode ? true`: Remove bytecode from `/bin`. Bytecode is only created when the filenames end with `.py`. * `removeBinByteCode ? true`: Remove bytecode from `/bin`. Bytecode is only created when the filenames end with `.py`.
* `setupPyGlobalFlags ? []`: List of flags passed to `setup.py` command.
* `setupPyBuildFlags ? []`: List of flags passed to `setup.py build_ext` command. * `setupPyBuildFlags ? []`: List of flags passed to `setup.py build_ext` command.
The `stdenv.mkDerivation` function accepts various parameters for describing build inputs (see "Specifying dependencies"). The following are of special
interest for Python packages, either because these are primarily used, or because their behaviour is different:
* `nativeBuildInputs ? []`: Build-time only dependencies. Typically executables as well as the items listed in `setup_requires`.
* `buildInputs ? []`: Build and/or run-time dependencies that need to be be compiled for the host machine. Typically non-Python libraries which are being linked.
* `checkInputs ? []`: Dependencies needed for running the `checkPhase`. These are added to `nativeBuildInputs` when `doCheck = true`. Items listed in `tests_require` go here.
* `propagatedBuildInputs ? []`: Aside from propagating dependencies, `buildPythonPackage` also injects code into and wraps executables with the paths included in this list. Items listed in `install_requires` go here.
##### Overriding Python packages ##### Overriding Python packages
The `buildPythonPackage` function has a `overridePythonAttrs` method that The `buildPythonPackage` function has a `overridePythonAttrs` method that
@ -638,7 +637,7 @@ with import <nixpkgs> {};
}; };
}); });
}; };
in pkgs.python3.override {inherit packageOverrides;}; in pkgs.python3.override {inherit packageOverrides; self = python;};
in python.withPackages(ps: [ps.blaze])).env in python.withPackages(ps: [ps.blaze])).env
``` ```
@ -727,7 +726,7 @@ Saving the following as `default.nix`
with import <nixpkgs> {}; with import <nixpkgs> {};
python.buildEnv.override { python.buildEnv.override {
extraLibs = [ pkgs.pythonPackages.pyramid ]; extraLibs = [ pythonPackages.pyramid ];
ignoreCollisions = true; ignoreCollisions = true;
} }
``` ```
@ -759,6 +758,7 @@ specified packages in its path.
* `extraLibs`: List of packages installed inside the environment. * `extraLibs`: List of packages installed inside the environment.
* `postBuild`: Shell command executed after the build of environment. * `postBuild`: Shell command executed after the build of environment.
* `ignoreCollisions`: Ignore file collisions inside the environment (default is `false`). * `ignoreCollisions`: Ignore file collisions inside the environment (default is `false`).
* `permitUserSite`: Skip setting the `PYTHONNOUSERSITE` environment variable in wrapped binaries in the environment (see the sketch below).
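A minimal sketch combining these arguments (for illustration only; the package choice is arbitrary):
```nix
with import <nixpkgs> {};

python.buildEnv.override {
  # Packages available inside the environment.
  extraLibs = [ pythonPackages.pyramid ];
  # Do not fail on colliding files.
  ignoreCollisions = true;
  # Leave PYTHONNOUSERSITE unset in the wrapped binaries.
  permitUserSite = true;
}
```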
#### `python.withPackages` function #### `python.withPackages` function
@ -809,11 +809,12 @@ Given a `default.nix`:
```nix ```nix
with import <nixpkgs> {}; with import <nixpkgs> {};
buildPythonPackage { name = "myproject"; pythonPackages.buildPythonPackage {
name = "myproject";
buildInputs = with pythonPackages; [ pyramid ];
buildInputs = with pkgs.pythonPackages; [ pyramid ]; src = ./.;
}
src = ./.; }
``` ```
Running `nix-shell` with no arguments should give you Running `nix-shell` with no arguments should give you
@ -874,7 +875,6 @@ example of such a situation is when `py.test` is used.
''; '';
} }
``` ```
- Unicode issues can typically be fixed by including `glibcLocales` in `buildInputs` and exporting `LC_ALL=en_US.utf-8`.
- Tests that attempt to access `$HOME` can be fixed by using the following work-around before running tests (e.g. `preCheck`): `export HOME=$(mktemp -d)` - Tests that attempt to access `$HOME` can be fixed by using the following work-around before running tests (e.g. `preCheck`): `export HOME=$(mktemp -d)`
## FAQ ## FAQ
@ -1000,10 +1000,13 @@ Create this `default.nix` file, together with a `requirements.txt` and simply ex
```nix ```nix
with import <nixpkgs> {}; with import <nixpkgs> {};
with pkgs.python27Packages; with python27Packages;
stdenv.mkDerivation { stdenv.mkDerivation {
name = "impurePythonEnv"; name = "impurePythonEnv";
src = null;
buildInputs = [ buildInputs = [
# these packages are required for virtualenv and pip to work: # these packages are required for virtualenv and pip to work:
# #
@ -1023,14 +1026,15 @@ stdenv.mkDerivation {
libxslt libxslt
libzip libzip
stdenv stdenv
zlib ]; zlib
src = null; ];
shellHook = '' shellHook = ''
# set SOURCE_DATE_EPOCH so that we can use python wheels # set SOURCE_DATE_EPOCH so that we can use python wheels
SOURCE_DATE_EPOCH=$(date +%s) SOURCE_DATE_EPOCH=$(date +%s)
virtualenv --no-setuptools venv virtualenv --no-setuptools venv
export PATH=$PWD/venv/bin:$PATH export PATH=$PWD/venv/bin:$PATH
pip install -r requirements.txt pip install -r requirements.txt
''; '';
} }
``` ```
@ -1123,6 +1127,14 @@ LLVM implementation. To use that one instead, Intel recommends users set it with
Note that `mkl` is only available on `x86_64-{linux,darwin}` platforms; Note that `mkl` is only available on `x86_64-{linux,darwin}` platforms;
moreover, Hydra is not building and distributing pre-compiled binaries using it. moreover, Hydra is not building and distributing pre-compiled binaries using it.
### What inputs do `setup_requires`, `install_requires` and `tests_require` map to?
In a `setup.py` or `setup.cfg` it is common to declare dependencies; they map to Nix arguments as follows (see the sketch after this list):
* `setup_requires` corresponds to `nativeBuildInputs`
* `install_requires` corresponds to `propagatedBuildInputs`
* `tests_require` corresponds to `checkInputs`
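For example, a minimal and entirely hypothetical `buildPythonPackage` call reflecting this mapping might look as follows (the package name and hash are placeholders):
```nix
{ lib, buildPythonPackage, fetchPypi, setuptools_scm, pytest, six }:

buildPythonPackage rec {
  pname = "example";   # hypothetical package
  version = "1.0.0";

  src = fetchPypi {
    inherit pname version;
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };

  nativeBuildInputs = [ setuptools_scm ];  # from setup_requires
  checkInputs = [ pytest ];                # from tests_require
  propagatedBuildInputs = [ six ];         # from install_requires
}
```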
## Contributing ## Contributing
### Contributing guidelines ### Contributing guidelines
View file
@ -4,71 +4,182 @@
<title>Qt</title> <title>Qt</title>
<para> <para>
Qt is a comprehensive desktop and mobile application development toolkit for This section describes the differences between Nix expressions for Qt
C++. Legacy support is available for Qt 3 and Qt 4, but all current libraries and applications and Nix expressions for other C++ software. Some
development uses Qt 5. The Qt 5 packages in Nixpkgs are updated frequently to knowledge of the latter is assumed. There are primarily two problems which
take advantage of new features, but older versions are typically retained the Qt infrastructure is designed to address: ensuring consistent versioning
until their support window ends. The most important consideration in of all dependencies and finding dependencies at runtime.
packaging Qt-based software is ensuring that each package and all its
dependencies use the same version of Qt 5; this consideration motivates most
of the tools described below.
</para> </para>
<section xml:id="ssec-qt-libraries"> <example xml:id='qt-default-nix'>
<title>Packaging Libraries for Nixpkgs</title> <title>Nix expression for a Qt package (<filename>default.nix</filename>)</title>
<programlisting>
{ mkDerivation, lib, qtbase }: <co xml:id='qt-default-nix-co-1' />
mkDerivation { <co xml:id='qt-default-nix-co-2' />
pname = "myapp";
version = "1.0";
buildInputs = [ qtbase ]; <co xml:id='qt-default-nix-co-3' />
}
</programlisting>
</example>
<calloutlist>
<callout arearefs='qt-default-nix-co-1'>
<para>
Import <literal>mkDerivation</literal> and Qt modules (such as
<literal>qtbase</literal>) directly. <emphasis>Do not</emphasis>
import Qt package sets; the Qt versions of dependencies may not be
coherent, causing build and runtime failures.
</para>
</callout>
<callout arearefs='qt-default-nix-co-2'>
<para>
Use <literal>mkDerivation</literal> instead of
<literal>stdenv.mkDerivation</literal>. <literal>mkDerivation</literal>
is a wrapper around <literal>stdenv.mkDerivation</literal> which
applies some Qt-specific settings.
This deriver accepts the same arguments as
<literal>stdenv.mkDerivation</literal>; refer to
<xref linkend='chap-stdenv' /> for details.
</para>
<para>
To use another deriver instead of
<literal>stdenv.mkDerivation</literal>, use
<literal>mkDerivationWith</literal>:
<programlisting>
mkDerivationWith myDeriver {
# ...
}
</programlisting>
If you cannot use <literal>mkDerivationWith</literal>, please refer to
<xref linkend='qt-runtime-dependencies' />.
</para>
</callout>
<callout arearefs='qt-default-nix-co-3'>
<para>
<literal>mkDerivation</literal> accepts the same arguments as
<literal>stdenv.mkDerivation</literal>, such as
<literal>buildInputs</literal>.
</para>
</callout>
</calloutlist>
<formalpara xml:id='qt-runtime-dependencies'>
<title>Locating runtime dependencies</title>
<para>
Qt applications need to be wrapped to find runtime dependencies. If you
cannot use <literal>mkDerivation</literal> or
<literal>mkDerivationWith</literal> above, include
<literal>wrapQtAppsHook</literal> in <literal>nativeBuildInputs</literal>:
<programlisting>
stdenv.mkDerivation {
# ...
nativeBuildInputs = [ wrapQtAppsHook ];
}
</programlisting>
</para>
</formalpara>
<para>
Entries added to <literal>qtWrapperArgs</literal> are used to modify the
wrappers created by <literal>wrapQtAppsHook</literal>. The entries are
passed as arguments to <xref linkend='fun-wrapProgram' />.
<programlisting>
mkDerivation {
# ...
qtWrapperArgs = [ ''--prefix PATH : /path/to/bin'' ];
}
</programlisting>
</para>
<para>
Set <literal>dontWrapQtApps</literal> to stop applications from being
wrapped automatically. Any application that still needs it must then be wrapped manually with
<literal>wrapQtApp</literal>, using the syntax of
<xref linkend='fun-wrapProgram' />:
<programlisting>
mkDerivation {
# ...
dontWrapQtApps = true;
preFixup = ''
wrapQtApp "$out/bin/myapp" --prefix PATH : /path/to/bin
'';
}
</programlisting>
</para>
<note>
<para> <para>
Whenever possible, libraries that use Qt 5 should be built with each <literal>wrapQtAppsHook</literal> ignores files that are non-ELF executables.
available version. Packages providing libraries should be added to the This means that scripts won't be automatically wrapped so you'll need to manually
top-level function <varname>mkLibsForQt5</varname>, which is used to build a wrap them as previously mentioned. An example of when you'd always need to do this
set of libraries for every Qt 5 version. A special is with Python applications that use PyQt.
<varname>callPackage</varname> function is used in this scope to ensure that
the entire dependency tree uses the same Qt 5 version. Import dependencies
unqualified, i.e., <literal>qtbase</literal> not
<literal>qt5.qtbase</literal>. <emphasis>Do not</emphasis> import a package
set such as <literal>qt5</literal> or <literal>libsForQt5</literal>.
</para> </para>
</note>
<para> <para>
If a library does not support a particular version of Qt 5, it is best to Libraries are built with every available version of Qt. Use the <literal>meta.broken</literal>
mark it as broken by setting its <literal>meta.broken</literal> attribute. A attribute to disable the package for unsupported Qt versions:
package may be marked broken for certain versions by testing the <programlisting>
<literal>qtbase.version</literal> attribute, which will always give the mkDerivation {
current Qt 5 version. # ...
</para>
</section>
<section xml:id="ssec-qt-applications"> # Disable this library with Qt &lt; 5.9.0
<title>Packaging Applications for Nixpkgs</title> meta.broken = builtins.compareVersions qtbase.version "5.9.0" &lt; 0;
}
</programlisting>
</para>
<para> <formalpara>
Call your application expression using <title>Adding a library to Nixpkgs</title>
<literal>libsForQt5.callPackage</literal> instead of <para>
<literal>callPackage</literal>. Import dependencies unqualified, i.e., Add a Qt library to <filename>all-packages.nix</filename> by adding it to the
<literal>qtbase</literal> not <literal>qt5.qtbase</literal>. <emphasis>Do collection inside <literal>mkLibsForQt5</literal>. This ensures that the
not</emphasis> import a package set such as <literal>qt5</literal> or library is built with every available version of Qt as needed.
<literal>libsForQt5</literal>. <example xml:id='qt-library-all-packages-nix'>
</para> <title>Adding a Qt library to <filename>all-packages.nix</filename></title>
<programlisting>
{
# ...
<para> mkLibsForQt5 = self: with self; {
Qt 5 maintains strict backward compatibility, so it is generally best to # ...
build an application package against the latest version using the
<varname>libsForQt5</varname> library set. In case a package does not build mylib = callPackage ../path/to/mylib {};
with the latest Qt version, it is possible to pick a set pinned to a };
particular version, e.g. <varname>libsForQt55</varname> for Qt 5.5, if that
is the latest version the package supports. If a package must be pinned to # ...
an older Qt version, be sure to file a bug upstream; because Qt is strictly }
backwards-compatible, any incompatibility is by definition a bug in the </programlisting>
application. </example>
</para> </para>
</formalpara>
<formalpara>
<title>Adding an application to Nixpkgs</title>
<para>
Add a Qt application to <filename>all-packages.nix</filename> using
<literal>libsForQt5.callPackage</literal> instead of the usual
<literal>callPackage</literal>. The former ensures that all dependencies
are built with the same version of Qt.
<example xml:id='qt-application-all-packages-nix'>
<title>Adding a Qt application to <filename>all-packages.nix</filename></title>
<programlisting>
{
# ...
myapp = libsForQt5.callPackage ../path/to/myapp/ {};
# ...
}
</programlisting>
</example>
</para>
</formalpara>
<para>
When testing applications in Nixpkgs, it is a common practice to build the
package with <literal>nix-build</literal> and run it using the created
symbolic link. This will not work with Qt applications, however, because
they have many hard runtime requirements that can only be guaranteed if the
package is actually installed. To test a Qt application, install it with
<literal>nix-env</literal> or run it inside <literal>nix-shell</literal>.
</para>
</section>
</section> </section>
View file
@ -336,9 +336,9 @@ with import <nixpkgs> {};
let src = fetchFromGitHub { let src = fetchFromGitHub {
owner = "mozilla"; owner = "mozilla";
repo = "nixpkgs-mozilla"; repo = "nixpkgs-mozilla";
# commit from: 2018-03-27 # commit from: 2019-05-15
rev = "2945b0b6b2fd19e7d23bac695afd65e320efcebe"; rev = "9f35c4b09fd44a77227e79ff0c1b4b6a69dff533";
sha256 = "034m1dryrzh2lmjvk3c0krgip652dql46w5yfwpvh7gavd3iypyw"; sha256 = "18h0nvh55b5an4gmlgfbvwbyqj91bklf1zymis6lbdh75571qaz0";
}; };
in in
View file

@ -26,7 +26,7 @@
texlive.combine { texlive.combine {
inherit (texlive) scheme-small collection-langkorean algorithms cm-super; inherit (texlive) scheme-small collection-langkorean algorithms cm-super;
} }
</programlisting> </programlisting>
There are all the schemes, collections and a few thousand packages, as There are all the schemes, collections and a few thousand packages, as
defined upstream (perhaps with tiny differences). defined upstream (perhaps with tiny differences).
</para> </para>
@ -44,7 +44,7 @@ texlive.combine {
# elem tlType [ "run" "bin" "doc" "source" ] # elem tlType [ "run" "bin" "doc" "source" ]
# there are also other attributes: version, name # there are also other attributes: version, name
} }
</programlisting> </programlisting>
</para> </para>
</listitem> </listitem>
<listitem> <listitem>

View file

@ -21,7 +21,7 @@ At the moment we support three different methods for managing plugins:
Adding custom .vimrc lines can be done using the following code: Adding custom .vimrc lines can be done using the following code:
``` ```nix
vim_configurable.customize { vim_configurable.customize {
# `name` specifies the name of the executable and package # `name` specifies the name of the executable and package
name = "vim-with-plugins"; name = "vim-with-plugins";
@ -32,11 +32,11 @@ vim_configurable.customize {
} }
``` ```
This configuration is used when vim is invoked with the command specified as name, in this case `vim-with-plugins`. This configuration is used when Vim is invoked with the command specified as name, in this case `vim-with-plugins`.
For Neovim the `configure` argument can be overridden to achieve the same: For Neovim the `configure` argument can be overridden to achieve the same:
``` ```nix
neovim.override { neovim.override {
configure = { configure = {
customRC = '' customRC = ''
@ -46,10 +46,10 @@ neovim.override {
} }
``` ```
If you want to use `neovim-qt` as a graphical editor, you can configure it by overriding neovim in an overlay If you want to use `neovim-qt` as a graphical editor, you can configure it by overriding Neovim in an overlay
or passing it an overridden neovim: or passing it an overridden Neovim:
``` ```nix
neovim-qt.override { neovim-qt.override {
neovim = neovim.override { neovim = neovim.override {
configure = { configure = {
@ -63,16 +63,16 @@ neovim-qt.override {
## Managing plugins with Vim packages ## Managing plugins with Vim packages
To store your plugins in Vim packages (the native vim plugin manager, see `:help packages`) the following example can be used: To store your plugins in Vim packages (the native Vim plugin manager, see `:help packages`) the following example can be used:
``` ```nix
vim_configurable.customize { vim_configurable.customize {
vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; { vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
# loaded on launch # loaded on launch
start = [ youcompleteme fugitive ]; start = [ youcompleteme fugitive ];
# manually loadable by calling `:packadd $plugin-name` # manually loadable by calling `:packadd $plugin-name`
# however, if a vim plugin has a dependency that is not explicitly listed in # however, if a Vim plugin has a dependency that is not explicitly listed in
# opt that dependency will always be added to start to avoid confusion. # opt that dependency will always be added to start to avoid confusion.
opt = [ phpCompletion elm-vim ]; opt = [ phpCompletion elm-vim ];
# To automatically load a plugin when opening a filetype, add vimrc lines like: # To automatically load a plugin when opening a filetype, add vimrc lines like:
# autocmd FileType php :packadd phpCompletion # autocmd FileType php :packadd phpCompletion
@ -83,7 +83,7 @@ vim_configurable.customize {
`myVimPackage` is an arbitrary name for the generated package. You can choose any name you like. `myVimPackage` is an arbitrary name for the generated package. You can choose any name you like.
For Neovim the syntax is: For Neovim the syntax is:
``` ```nix
neovim.override { neovim.override {
configure = { configure = {
customRC = '' customRC = ''
@ -92,7 +92,7 @@ neovim.override {
packages.myVimPackage = with pkgs.vimPlugins; { packages.myVimPackage = with pkgs.vimPlugins; {
# see examples below how to use custom packages # see examples below how to use custom packages
start = [ ]; start = [ ];
# If a vim plugin has a dependency that is not explicitly listed in # If a Vim plugin has a dependency that is not explicitly listed in
# opt that dependency will always be added to start to avoid confusion. # opt that dependency will always be added to start to avoid confusion.
opt = [ ]; opt = [ ];
}; };
@ -102,7 +102,7 @@ neovim.override {
The resulting package can be added to `packageOverrides` in `~/.nixpkgs/config.nix` to make it installable: The resulting package can be added to `packageOverrides` in `~/.nixpkgs/config.nix` to make it installable:
``` ```nix
{ {
packageOverrides = pkgs: with pkgs; { packageOverrides = pkgs: with pkgs; {
myVim = vim_configurable.customize { myVim = vim_configurable.customize {
@ -126,7 +126,7 @@ After that you can install your special grafted `myVim` or `myNeovim` packages.
To use [vim-plug](https://github.com/junegunn/vim-plug) to manage your Vim To use [vim-plug](https://github.com/junegunn/vim-plug) to manage your Vim
plugins the following example can be used: plugins the following example can be used:
``` ```nix
vim_configurable.customize { vim_configurable.customize {
vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; { vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
# loaded on launch # loaded on launch
@ -137,7 +137,7 @@ vim_configurable.customize {
For Neovim the syntax is: For Neovim the syntax is:
``` ```nix
neovim.override { neovim.override {
configure = { configure = {
customRC = '' customRC = ''
@ -161,95 +161,117 @@ assuming that "using latest version" is ok most of the time.
First create a vim-scripts file having one plugin name per line. Example: First create a vim-scripts file having one plugin name per line. Example:
"tlib" ```
{'name': 'vim-addon-sql'} "tlib"
{'filetype_regex': '\%(vim)$', 'names': ['reload', 'vim-dev-plugin']} {'name': 'vim-addon-sql'}
{'filetype_regex': '\%(vim)$', 'names': ['reload', 'vim-dev-plugin']}
```
Such a vim-scripts file can also be read by VAM like this: Such a vim-scripts file can also be read by VAM like this:
call vam#Scripts(expand('~/.vim-scripts'), {}) ```vim
call vam#Scripts(expand('~/.vim-scripts'), {})
```
Create a default.nix file: Create a default.nix file:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }: ```nix
nixpkgs.vim_configurable.customize { name = "vim"; vimrcConfig.vam.pluginDictionaries = [ "vim-addon-vim2nix" ]; } { nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
nixpkgs.vim_configurable.customize { name = "vim"; vimrcConfig.vam.pluginDictionaries = [ "vim-addon-vim2nix" ]; }
```
Create a generate.vim file: Create a generate.vim file:
ActivateAddons vim-addon-vim2nix ```vim
let vim_scripts = "vim-scripts" ActivateAddons vim-addon-vim2nix
call nix#ExportPluginsForNix({ let vim_scripts = "vim-scripts"
\ 'path_to_nixpkgs': eval('{"'.substitute(substitute(substitute($NIX_PATH, ':', ',', 'g'), '=',':', 'g'), '\([:,]\)', '"\1"',"g").'"}')["nixpkgs"], call nix#ExportPluginsForNix({
\ 'cache_file': '/tmp/vim2nix-cache', \ 'path_to_nixpkgs': eval('{"'.substitute(substitute(substitute($NIX_PATH, ':', ',', 'g'), '=',':', 'g'), '\([:,]\)', '"\1"',"g").'"}')["nixpkgs"],
\ 'try_catch': 0, \ 'cache_file': '/tmp/vim2nix-cache',
\ 'plugin_dictionaries': ["vim-addon-manager"]+map(readfile(vim_scripts), 'eval(v:val)') \ 'try_catch': 0,
\ }) \ 'plugin_dictionaries': ["vim-addon-manager"]+map(readfile(vim_scripts), 'eval(v:val)')
\ })
```
Then run Then run
nix-shell -p vimUtils.vim_with_vim2nix --command "vim -c 'source generate.vim'" ```bash
nix-shell -p vimUtils.vim_with_vim2nix --command "vim -c 'source generate.vim'"
```
You should get a Vim buffer with the nix derivations (output1) and vam.pluginDictionaries (output2). You should get a Vim buffer with the nix derivations (output1) and vam.pluginDictionaries (output2).
You can add your vim to your system's configuration file like this and start it with "vim-my": You can add your Vim to your system's configuration file like this and start it with "vim-my":
my-vim = ```
let plugins = let inherit (vimUtils) buildVimPluginFrom2Nix; in { my-vim =
copy paste output1 here let plugins = let inherit (vimUtils) buildVimPluginFrom2Nix; in {
}; in vim_configurable.customize { copy paste output1 here
name = "vim-my"; }; in vim_configurable.customize {
name = "vim-my";
vimrcConfig.vam.knownPlugins = plugins; # optional vimrcConfig.vam.knownPlugins = plugins; # optional
vimrcConfig.vam.pluginDictionaries = [ vimrcConfig.vam.pluginDictionaries = [
copy paste output2 here copy paste output2 here
]; ];
# Pathogen would be
# vimrcConfig.pathogen.knownPlugins = plugins; # plugins
# vimrcConfig.pathogen.pluginNames = ["tlib"];
};
# Pathogen would be
# vimrcConfig.pathogen.knownPlugins = plugins; # plugins
# vimrcConfig.pathogen.pluginNames = ["tlib"];
};
```
Sample output1: Sample output1:
"reload" = buildVimPluginFrom2Nix { # created by nix#NixDerivation ```
name = "reload"; "reload" = buildVimPluginFrom2Nix { # created by nix#NixDerivation
src = fetchgit { name = "reload";
url = "git://github.com/xolox/vim-reload"; src = fetchgit {
rev = "0a601a668727f5b675cb1ddc19f6861f3f7ab9e1"; url = "git://github.com/xolox/vim-reload";
sha256 = "0vb832l9yxj919f5hfg6qj6bn9ni57gnjd3bj7zpq7d4iv2s4wdh"; rev = "0a601a668727f5b675cb1ddc19f6861f3f7ab9e1";
}; sha256 = "0vb832l9yxj919f5hfg6qj6bn9ni57gnjd3bj7zpq7d4iv2s4wdh";
dependencies = ["nim-misc"]; };
dependencies = ["nim-misc"];
}; };
[...] [...]
```
Sample output2: Sample output2:
[ ```nix
''vim-addon-manager'' [
''tlib'' ''vim-addon-manager''
{ "name" = ''vim-addon-sql''; } ''tlib''
{ "filetype_regex" = ''\%(vim)$$''; "names" = [ ''reload'' ''vim-dev-plugin'' ]; } { "name" = ''vim-addon-sql''; }
] { "filetype_regex" = ''\%(vim)$$''; "names" = [ ''reload'' ''vim-dev-plugin'' ]; }
]
```
## Adding new plugins to nixpkgs ## Adding new plugins to nixpkgs
In `pkgs/misc/vim-plugins/vim-plugin-names` we store the plugin names Nix expressions for Vim plugins are stored in [pkgs/misc/vim-plugins](/pkgs/misc/vim-plugins). For the vast majority of plugins, Nix expressions are automatically generated by running [`./update.py`](/pkgs/misc/vim-plugins/update.py). This creates a [generated.nix](/pkgs/misc/vim-plugins/generated.nix) file based on the plugins listed in [vim-plugin-names](/pkgs/misc/vim-plugins/vim-plugin-names). Plugins are listed in alphabetical order in `vim-plugin-names` using the format `[github username]/[repository]`. For example https://github.com/scrooloose/nerdtree becomes `scrooloose/nerdtree`.
for all vim plugins we automatically generate plugins for.
The format of this file is `github username/github repository`: Some plugins require overrides in order to function properly. Overrides are placed in [overrides.nix](/pkgs/misc/vim-plugins/overrides.nix). Overrides are most often required when a plugin needs additional dependencies, or when extra steps are required during the build process. For example `deoplete-fish` requires both `deoplete-nvim` and `vim-fish`, and so the following override was added:
For example https://github.com/scrooloose/nerdtree becomes `scrooloose/nerdtree`.
After adding your plugin to this file run the `./update.py` in the same folder. ```
This will update a file called `generated.nix` and make your plugin accessible in the deoplete-fish = super.deoplete-fish.overrideAttrs(old: {
`vimPlugins` attribute set (`vimPlugins.nerdtree` in our example). dependencies = with super; [ deoplete-nvim vim-fish ];
If additional steps to the build process of the plugin are required, add an });
override to the `pkgs/misc/vim-plugins/default.nix` in the same directory. ```
Sometimes plugins require an override that must be changed when the plugin is updated. This can cause issues when Vim plugins are auto-updated but the associated override isn't updated. For these plugins, the override should be written so that it specifies all information required to install the plugin, and running `./update.py` doesn't change the derivation for the plugin. Manually updating the override is required to update these types of plugins. An example of such a plugin is `LanguageClient-neovim`.
To add a new plugin:
1. run `./update.py` and create a commit named "vimPlugins: Update",
2. add the new plugin to [vim-plugin-names](/pkgs/misc/vim-plugins/vim-plugin-names) and add overrides if required to [overrides.nix](/pkgs/misc/vim-plugins/overrides.nix),
3. run `./update.py` again and create a commit named "vimPlugins.[name]: init at [version]" (where `name` and `version` can be found in [generated.nix](/pkgs/misc/vim-plugins/generated.nix)), and
4. create a pull request.
## Important repositories ## Important repositories
- [vim-pi](https://bitbucket.org/vimcommunity/vim-pi) is a plugin repository - [vim-pi](https://bitbucket.org/vimcommunity/vim-pi) is a plugin repository
from the VAM plugin manager, meant to be used by other plugin managers as well. It is used by from the VAM plugin manager, meant to be used by other plugin managers as well. It is used by
- [vim2nix](http://github.com/MarcWeber/vim-addon-vim2nix) which generates the - [vim2nix](https://github.com/MarcWeber/vim-addon-vim2nix) which generates the
.nix code .nix code

View file

@ -1,12 +1,13 @@
<book xmlns="http://docbook.org/ns/docbook" <book xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"> xmlns:xi="http://www.w3.org/2001/XInclude">
<info> <info>
<title>Nixpkgs Contributors Guide</title> <title>Nixpkgs Users and Contributors Guide</title>
<subtitle>Version <xi:include href=".version" parse="text" /> <subtitle>Version <xi:include href=".version" parse="text" />
</subtitle> </subtitle>
</info> </info>
<xi:include href="introduction.chapter.xml" /> <xi:include href="introduction.chapter.xml" />
<xi:include href="quick-start.xml" /> <xi:include href="quick-start.xml" />
<xi:include href="package-specific-user-notes.xml" />
<xi:include href="stdenv.xml" /> <xi:include href="stdenv.xml" />
<xi:include href="multiple-output.xml" /> <xi:include href="multiple-output.xml" />
<xi:include href="cross-compilation.xml" /> <xi:include href="cross-compilation.xml" />

View file

@ -30,7 +30,7 @@ meta = with stdenv.lib; {
The meta-attributes of a package can be queried from the command-line using The meta-attributes of a package can be queried from the command-line using
<command>nix-env</command>: <command>nix-env</command>:
<screen> <screen>
$ nix-env -qa hello --json <prompt>$ </prompt>nix-env -qa hello --json
{ {
"hello": { "hello": {
"meta": { "meta": {
@ -70,7 +70,7 @@ $ nix-env -qa hello --json
<command>nix-env</command> knows about the <varname>description</varname> <command>nix-env</command> knows about the <varname>description</varname>
field specifically: field specifically:
<screen> <screen>
$ nix-env -qa hello --description <prompt>$ </prompt>nix-env -qa hello --description
hello-2.3 A program that produces a familiar, friendly greeting hello-2.3 A program that produces a familiar, friendly greeting
</screen> </screen>
</para> </para>
@ -150,6 +150,19 @@ hello-2.3 A program that produces a familiar, friendly greeting
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
<varlistentry>
<term>
<varname>changelog</varname>
</term>
<listitem>
<para>
A link or a list of links to the changelog for a package.
A link may use expansion to refer to the correct changelog version.
Example:
<literal>"https://git.savannah.gnu.org/cgit/hello.git/plain/NEWS?h=v${version}"</literal>
</para>
</listitem>
</varlistentry>
<varlistentry> <varlistentry>
<term> <term>
<varname>license</varname> <varname>license</varname>
@ -259,11 +272,9 @@ meta.platforms = stdenv.lib.platforms.linux;
<para> <para>
This attribute is special in that it is not actually under the This attribute is special in that it is not actually under the
<literal>meta</literal> attribute set but rather under the <literal>meta</literal> attribute set but rather under the
<literal>passthru</literal> attribute set. This is due to a current <literal>passthru</literal> attribute set. This is due to how
limitation of Nix, and will change as soon as Nixpkgs will be able to <literal>meta</literal> attributes work, and the fact that they
depend on a new enough version of Nix. See are supposed to contain only metadata, not derivations.
<link xlink:href="https://github.com/NixOS/nix/issues/2532">the relevant
issue</link> for more details.
</para> </para>
</warning> </warning>
<para> <para>

View file

@ -101,6 +101,13 @@
contain <varname>$outputBin</varname> and <varname>$outputLib</varname> are contain <varname>$outputBin</varname> and <varname>$outputLib</varname> are
also added. (See <xref linkend="multiple-output-file-type-groups" />.) also added. (See <xref linkend="multiple-output-file-type-groups" />.)
</para> </para>
<para>
In some cases it may be desirable to combine different outputs under a
single store path. A function <literal>symlinkJoin</literal> can be used to
do this. (Note that it may negate some closure size benefits of using a
multiple-output package.)
</para>
</section> </section>
<section xml:id="sec-multiple-outputs-"> <section xml:id="sec-multiple-outputs-">
<title>Writing a split derivation</title> <title>Writing a split derivation</title>

View file

@ -92,9 +92,9 @@ modulesTree = [kernel]
<para> <para>
If needed you can also run <literal>make menuconfig</literal>: If needed you can also run <literal>make menuconfig</literal>:
<screen> <screen>
$ nix-env -i ncurses <prompt>$ </prompt>nix-env -i ncurses
$ export NIX_CFLAGS_LINK=-lncurses <prompt>$ </prompt>export NIX_CFLAGS_LINK=-lncurses
$ make menuconfig ARCH=<replaceable>arch</replaceable></screen> <prompt>$ </prompt>make menuconfig ARCH=<replaceable>arch</replaceable></screen>
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
@ -142,8 +142,8 @@ $ make menuconfig ARCH=<replaceable>arch</replaceable></screen>
<para> <para>
The generator is invoked as follows: The generator is invoked as follows:
<screen> <screen>
$ cd pkgs/servers/x11/xorg <prompt>$ </prompt>cd pkgs/servers/x11/xorg
$ cat tarballs-7.5.list extra.list old.list \ <prompt>$ </prompt>cat tarballs-7.5.list extra.list old.list \
| perl ./generate-expr-from-tarballs.pl | perl ./generate-expr-from-tarballs.pl
</screen> </screen>
For each of the tarballs in the <filename>.list</filename> files, the script For each of the tarballs in the <filename>.list</filename> files, the script
@ -160,8 +160,8 @@ $ cat tarballs-7.5.list extra.list old.list \
A file like <filename>tarballs-7.5.list</filename> contains all tarballs in A file like <filename>tarballs-7.5.list</filename> contains all tarballs in
a X.org release. It can be generated like this: a X.org release. It can be generated like this:
<screen> <screen>
$ export i="mirror://xorg/X11R7.4/src/everything/" <prompt>$ </prompt>export i="mirror://xorg/X11R7.4/src/everything/"
$ cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \ <prompt>$ </prompt>cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \
| perl -e 'while (&lt;>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'i'}$2\n"; }; }' \ | perl -e 'while (&lt;>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'i'}$2\n"; }; }' \
| sort > tarballs-7.4.list | sort > tarballs-7.4.list
</screen> </screen>
@ -210,7 +210,7 @@ $ cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \
often available. It is possible to list available Eclipse packages by often available. It is possible to list available Eclipse packages by
issuing the command: issuing the command:
<screen> <screen>
$ nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses --description <prompt>$ </prompt>nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses --description
</screen> </screen>
Once an Eclipse variant is installed it can be run using the Once an Eclipse variant is installed it can be run using the
<command>eclipse</command> command, as expected. From within Eclipse it is <command>eclipse</command> command, as expected. From within Eclipse it is
@ -250,7 +250,7 @@ packageOverrides = pkgs: {
available for installation using <varname>eclipseWithPlugins</varname> by available for installation using <varname>eclipseWithPlugins</varname> by
running running
<screen> <screen>
$ nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses.plugins --description <prompt>$ </prompt>nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses.plugins --description
</screen> </screen>
</para> </para>
@ -307,19 +307,36 @@ packageOverrides = pkgs: {
</screen> </screen>
</para> </para>
</section> </section>
<section xml:id="sec-elm"> <section xml:id="sec-elm">
<title>Elm</title> <title>Elm</title>
<para> <para>
To update the Elm compiler, see <filename>nixpkgs/pkgs/development/compilers/elm/README.md</filename>. To start a development environment, run <command>nix-shell -p elmPackages.elm elmPackages.elm-format</command>
</para> </para>
<para> <para>
To package Elm applications, <link xlink:href="https://github.com/hercules-ci/elm2nix#elm2nix">read about elm2nix</link>. To update the Elm compiler, see
<filename>nixpkgs/pkgs/development/compilers/elm/README.md</filename>.
</para>
<para>
To package Elm applications,
<link xlink:href="https://github.com/hercules-ci/elm2nix#elm2nix">read about
elm2nix</link>.
</para> </para>
</section> </section>
<section xml:id="sec-kakoune">
<title>Kakoune</title>
<para>
Kakoune can be built to autoload plugins:
<programlisting>(kakoune.override {
configure = {
plugins = with pkgs.kakounePlugins; [ parinfer-rust ];
};
})</programlisting>
</para>
</section>
<section xml:id="sec-shell-helpers"> <section xml:id="sec-shell-helpers">
<title>Interactive shell helpers</title> <title>Interactive shell helpers</title>
@ -347,312 +364,6 @@ packageOverrides = pkgs: {
</screen> </screen>
</para> </para>
</section> </section>
<section xml:id="sec-steam">
<title>Steam</title>
<section xml:id="sec-steam-nix">
<title>Steam in Nix</title>
<para>
Steam is distributed as a <filename>.deb</filename> file, for now only as
an i686 package (the amd64 package only has documentation). When unpacked,
it has a script called <filename>steam</filename> that in ubuntu (their
target distro) would go to <filename>/usr/bin</filename>. When run for the
first time, this script copies some files to the user's home, which include
another script that is ultimately responsible for launching the steam
binary, which is also in $HOME.
</para>
<para>
Nix problems and constraints:
<itemizedlist>
<listitem>
<para>
We don't have <filename>/bin/bash</filename> and many scripts point
there. Similarly for <filename>/usr/bin/python</filename> .
</para>
</listitem>
<listitem>
<para>
We don't have the dynamic loader in <filename>/lib </filename>.
</para>
</listitem>
<listitem>
<para>
The <filename>steam.sh</filename> script in $HOME can not be patched, as
it is checked and rewritten by steam.
</para>
</listitem>
<listitem>
<para>
The steam binary cannot be patched, it's also checked.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The current approach to deploy Steam in NixOS is composing a FHS-compatible
chroot environment, as documented
<link xlink:href="http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html">here</link>.
This allows us to have binaries in the expected paths without disrupting
the system, and to avoid patching them to work in a non FHS environment.
</para>
</section>
<section xml:id="sec-steam-play">
<title>How to play</title>
<para>
For 64-bit systems it's important to have
<programlisting>hardware.opengl.driSupport32Bit = true;</programlisting>
in your <filename>/etc/nixos/configuration.nix</filename>. You'll also need
<programlisting>hardware.pulseaudio.support32Bit = true;</programlisting>
if you are using PulseAudio - this will enable 32bit ALSA apps integration.
To use the Steam controller or other Steam supported controllers such as
the DualShock 4 or Nintendo Switch Pro, you need to add
<programlisting>hardware.steam-hardware.enable = true;</programlisting>
to your configuration.
</para>
</section>
<section xml:id="sec-steam-troub">
<title>Troubleshooting</title>
<para>
<variablelist>
<varlistentry>
<term>
Steam fails to start. What do I do?
</term>
<listitem>
<para>
Try to run
<programlisting>strace steam</programlisting>
to see what is causing steam to fail.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Using the FOSS Radeon or nouveau (nvidia) drivers
</term>
<listitem>
<itemizedlist>
<listitem>
<para>
The <literal>newStdcpp</literal> parameter was removed since NixOS
17.09 and should not be needed anymore.
</para>
</listitem>
<listitem>
<para>
Steam ships statically linked with a version of libcrypto that
conflicts with the one dynamically loaded by radeonsi_dri.so. If you
get the error
<programlisting>steam.sh: line 713: 7842 Segmentation fault (core dumped)</programlisting>
have a look at
<link xlink:href="https://github.com/NixOS/nixpkgs/pull/20269">this
pull request</link>.
</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Java
</term>
<listitem>
<orderedlist>
<listitem>
<para>
There is no java in steam chrootenv by default. If you get a message
like
<programlisting>/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found</programlisting>
You need to add
<programlisting> steam.override { withJava = true; };</programlisting>
to your configuration.
</para>
</listitem>
</orderedlist>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
<section xml:id="sec-steam-run">
<title>steam-run</title>
<para>
The FHS-compatible chroot used for steam can also be used to run other
linux games that expect a FHS environment. To do it, add
<programlisting>pkgs.(steam.override {
nativeOnly = true;
newStdcpp = true;
}).run</programlisting>
to your configuration, rebuild, and run the game with
<programlisting>steam-run ./foo</programlisting>
</para>
</section>
</section>
<section xml:id="sec-emacs">
<title>Emacs</title>
<section xml:id="sec-emacs-config">
<title>Configuring Emacs</title>
<para>
The Emacs package comes with some extra helpers to make it easier to
configure. <varname>emacsWithPackages</varname> allows you to manage
packages from ELPA. This means that you will not have to install that
packages from within Emacs. For instance, if you wanted to use
<literal>company</literal>, <literal>counsel</literal>,
<literal>flycheck</literal>, <literal>ivy</literal>,
<literal>magit</literal>, <literal>projectile</literal>, and
<literal>use-package</literal> you could use this as a
<filename>~/.config/nixpkgs/config.nix</filename> override:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
}
}
</screen>
<para>
You can install it like any other package via <command>nix-env -iA
myEmacs</command>. However, this will only install those packages. It will
not <literal>configure</literal> them for us. To do this, we need to
provide a configuration file. Luckily, it is possible to do this from
within Nix! By modifying the above example, we can make Emacs load a custom
config file. The key is to create a package that provides a
<filename>default.el</filename> file in
<filename>/share/emacs/site-start/</filename>. Emacs knows to load this
file automatically when it starts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("&lt;C-tab&gt;" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
This provides a fairly full Emacs start file. It will load in addition to
the user's personal config. You can always disable it by passing
<command>-q</command> to the Emacs command.
</para>
<para>
Sometimes <varname>emacsWithPackages</varname> is not enough, as this
package set has some priorities imposed on packages (with the lowest
priority assigned to Melpa Unstable, and the highest for packages manually
defined in <filename>pkgs/top-level/emacs-packages.nix</filename>). But you
can't control these priorities when some package is installed as a
dependency. You can override it on a per-package basis, providing all the
required dependencies manually, but this is tedious and there is always a
possibility that an unwanted dependency will sneak in through some other
package. To completely override such a package you can use
<varname>overrideScope'</varname>.
</para>
<screen>
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesNgGen emacs).overrideScope' overrides).emacsWithPackages (p: with p; [
# here both these packages will use the haskell-mode of our own choice
ghc-mod
dante
])
</screen>
</section>
</section>
<section xml:id="sec-weechat"> <section xml:id="sec-weechat">
<title>Weechat</title> <title>Weechat</title>
@ -757,64 +468,6 @@ stdenv.mkDerivation {
}</programlisting> }</programlisting>
</para> </para>
</section> </section>
<section xml:id="sec-citrix">
<title>Citrix Receiver</title>
<para>
The <link xlink:href="https://www.citrix.com/products/receiver/">Citrix
Receiver</link> is a remote desktop viewer which provides access to
<link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link>
installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually as the license
agreements of the vendor need to be accepted first. This is available at
the
<link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">download
page at citrix.com</link>. Then run <literal>nix-prefetch-url
file://$PWD/linuxx64-$version.tar.gz</literal>. With the archive available
in the store the package can be built and installed with Nix.
</para>
<para>
<emphasis>Note: it's recommended to install <literal>Citrix
Receiver</literal> using <literal>nix-env -i</literal> or globally to
ensure that the <literal>.desktop</literal> files are installed properly
into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't be possible to
open <literal>.ica</literal> files automatically from the browser to start
a Citrix connection.</emphasis>
</para>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Receiver</literal> in <literal>nixpkgs</literal> trusts
several certificates
<link xlink:href="https://curl.haxx.se/docs/caextract.html">from the
Mozilla database</link> by default. However several companies using Citrix
might require their own corporate certificate. On distros with imperative
packaging these certs can be stored easily in
<link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>,
however this directory is a store path in <literal>nixpkgs</literal>. In
order to work around this issue the package provides a simple mechanism to
add custom certificates without rebuilding the entire package using
<literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_receiver.override {
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>
<section xml:id="sec-ibus-typing-booster"> <section xml:id="sec-ibus-typing-booster">
<title>ibus-engines.typing-booster</title> <title>ibus-engines.typing-booster</title>
@ -853,7 +506,7 @@ citrix_receiver.override {
<para> <para>
The IBus engine is based on <literal>hunspell</literal> to support The IBus engine is based on <literal>hunspell</literal> to support
completion in many languages. By default the dictionaries completion in many languages. By default the dictionaries
<literal>de-de</literal>, <literal>en-us</literal>, <literal>de-de</literal>, <literal>en-us</literal>, <literal>fr-moderne</literal>,
<literal>es-es</literal>, <literal>it-it</literal>, <literal>es-es</literal>, <literal>it-it</literal>,
<literal>sv-se</literal> and <literal>sv-fi</literal> are in use. To add <literal>sv-se</literal> and <literal>sv-fi</literal> are in use. To add
another dictionary, the package can be overridden like this: another dictionary, the package can be overridden like this:
@ -886,4 +539,52 @@ citrix_receiver.override {
</para> </para>
</section> </section>
</section> </section>
<section xml:id="sec-nginx">
<title>Nginx</title>
<para>
<link xlink:href="https://nginx.org/">Nginx</link> is a
reverse proxy and lightweight webserver.
</para>
<section xml:id="sec-nginx-etag">
<title>ETags on static files served from the Nix store</title>
<para>
HTTP has a couple different mechanisms for caching to prevent
clients from having to download the same content repeatedly
if a resource has not changed since the last time it was requested.
When nginx is used as a server for static files, it implements
the caching mechanism based on the
<link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><literal>Last-Modified</literal></link>
response header automatically; unfortunately, it works by using
filesystem timestamps to determine the value of the
<literal>Last-Modified</literal> header. This doesn't give the
desired behavior when the file is in the Nix store, because all
file timestamps are set to 0 (for reasons related to build
reproducibility).
</para>
<para>
Fortunately, HTTP supports an alternative (and more effective)
caching mechanism: the
<link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag"><literal>ETag</literal></link>
response header. The value of the <literal>ETag</literal> header
specifies some identifier for the particular content that the
server is sending (e.g. a hash). When a client makes a second
request for the same resource, it sends that value back in an
<literal>If-None-Match</literal> header. If the ETag value is
unchanged, then the server does not need to resend the content.
</para>
<para>
As of NixOS 19.09, the nginx package in Nixpkgs is patched such
that when nginx serves a file out of <filename>/nix/store</filename>,
the hash in the store path is used as the <literal>ETag</literal>
header in the HTTP response, thus providing proper caching functionality.
This happens automatically; you do not need to modify any
configuration to get this behavior.
</para>
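<para>
 A quick way to observe this (the header values below are purely illustrative)
 is to request a file served from the store twice, echoing the returned
 <literal>ETag</literal> back on the second request:
<screen>
<prompt>$ </prompt>curl -I http://localhost/index.html
HTTP/1.1 200 OK
ETag: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
<prompt>$ </prompt>curl -I -H 'If-None-Match: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"' http://localhost/index.html
HTTP/1.1 304 Not Modified
</screen>
</para>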
</section>
</section>
</chapter> </chapter>

View file

@ -0,0 +1,482 @@
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="package-specific-user-notes">
<title>Package-specific usage notes</title>
<para>
This chapter includes some notes
that apply to specific packages and should
answer some of the frequently asked questions
related to Nixpkgs use.
Some useful information related to package use
can be found in <link linkend="chap-package-notes">package-specific development notes</link>.
</para>
<section xml:id="opengl">
<title>OpenGL</title>
<para>
Packages that use OpenGL have the NixOS desktop as their primary target. The
current solution for loading the GPU-specific drivers is based on
<literal>libglvnd</literal> and looks for the driver implementation in
<literal>LD_LIBRARY_PATH</literal>. If you are using a non-NixOS
GNU/Linux/X11 desktop with free software video drivers, consider launching
OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of
<literal>libglvnd</literal> and <literal>mesa_drivers</literal> in
<literal>LD_LIBRARY_PATH</literal>. For proprietary video drivers you might
have luck with also adding the corresponding video driver package.
</para>
</section>
<section xml:id="locales">
<title>Locales</title>
<para>
To allow simultaneous use of packages linked against different versions of
<literal>glibc</literal> with different locale archive formats, Nixpkgs
patches <literal>glibc</literal> to rely on the
<literal>LOCALE_ARCHIVE</literal> environment variable.
</para>
<para>
On non-NixOS distributions this variable is obviously not set. This can
cause regressions in language support or even crashes in some
Nixpkgs-provided programs. The simplest way to mitigate this problem is
exporting the <literal>LOCALE_ARCHIVE</literal> variable pointing to
<literal>${glibcLocales}/lib/locale/locale-archive</literal>. The drawback
(and the reason this is not the default) is the relatively large (a hundred
MiB) size of the full set of locales. It is possible to build a custom set
of locales by overriding parameters <literal>allLocales</literal> and
<literal>locales</literal> of the package.
</para>
</section>
<section xml:id="sec-emacs">
<title>Emacs</title>
<section xml:id="sec-emacs-config">
<title>Configuring Emacs</title>
<para>
The Emacs package comes with some extra helpers to make it easier to
configure. <varname>emacsWithPackages</varname> allows you to manage
packages from ELPA. This means that you will not have to install those
packages from within Emacs. For instance, if you wanted to use
<literal>company</literal>, <literal>counsel</literal>,
<literal>flycheck</literal>, <literal>ivy</literal>,
<literal>magit</literal>, <literal>projectile</literal>, and
<literal>use-package</literal> you could use this as a
<filename>~/.config/nixpkgs/config.nix</filename> override:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
}
}
</screen>
<para>
You can install it like any other package via <command>nix-env -iA
myEmacs</command>. However, this will only install those packages. It will
not <literal>configure</literal> them for us. To do this, we need to
provide a configuration file. Luckily, it is possible to do this from
within Nix! By modifying the above example, we can make Emacs load a custom
config file. The key is to create a package that provides a
<filename>default.el</filename> file in
<filename>/share/emacs/site-start/</filename>. Emacs knows to load this
file automatically when it starts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("&lt;C-tab&gt;" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
This provides a fairly full Emacs start file. It will load in addition to
the user's personal config. You can always disable it by passing
<command>-q</command> to the Emacs command.
</para>
<para>
Sometimes <varname>emacsWithPackages</varname> is not enough, as this
package set has some priorities imposed on packages (with the lowest
priority assigned to Melpa Unstable, and the highest for packages manually
defined in <filename>pkgs/top-level/emacs-packages.nix</filename>). But you
can't control these priorities when some package is installed as a
dependency. You can override it on a per-package basis, providing all the
required dependencies manually, but this is tedious and there is always a
possibility that an unwanted dependency will sneak in through some other
package. To completely override such a package you can use
<varname>overrideScope'</varname>.
</para>
<screen>
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesGen emacs).overrideScope' overrides).emacsWithPackages (p: with p; [
# here both these packages will use the haskell-mode of our own choice
ghc-mod
dante
])
</screen>
</section>
</section>
<section xml:id="dlib">
<title>DLib</title>
<para>
<link xlink:href="http://dlib.net/">DLib</link> is a modern, C++-based toolkit which
provides several machine learning algorithms.
</para>
<section xml:id="compiling-without-avx-support">
<title>Compiling without AVX support</title>
<para>
Especially older CPUs don't support
<link xlink:href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</link>
(<abbrev>Advanced Vector Extensions</abbrev>) instructions that are used by DLib to
optimize its algorithms.
</para>
<para>
On the affected hardware errors like <literal>Illegal instruction</literal> will occur.
In those cases AVX support needs to be disabled:
<programlisting>self: super: {
dlib = super.dlib.override { avxSupport = false; };
}</programlisting>
</para>
</section>
</section>
<section xml:id="unfree-software">
<title>Unfree software</title>
<para>
All users of Nixpkgs are free software users, and many users (and
developers) of Nixpkgs want to limit and tightly control their exposure to
unfree software. At the same time, many users need (or want)
to run some specific
pieces of proprietary software. Nixpkgs includes some expressions for unfree
software packages. By default unfree software cannot be installed and
doesn't show up in searches. To allow installing unfree software in a
single Nix invocation one can export
<literal>NIXPKGS_ALLOW_UNFREE=1</literal>. For a persistent solution, users
can set <literal>allowUnfree</literal> in the Nixpkgs configuration.
</para>
<para>
Fine-grained control is possible by defining
an <literal>allowUnfreePredicate</literal> function in the Nixpkgs config; it takes the
<literal>mkDerivation</literal> parameter attrset and returns
<literal>true</literal> for unfree packages that should be allowed.
</para>
</section>
<section xml:id="sec-steam">
<title>Steam</title>
<section xml:id="sec-steam-nix">
<title>Steam in Nix</title>
<para>
Steam is distributed as a <filename>.deb</filename> file, for now only as
an i686 package (the amd64 package only has documentation). When unpacked,
it has a script called <filename>steam</filename> that in Ubuntu (their
target distro) would go to <filename>/usr/bin</filename>. When run for the
first time, this script copies some files to the user's home, which include
another script that is ultimately responsible for launching the steam
binary, which is also in $HOME.
</para>
<para>
Nix problems and constraints:
<itemizedlist>
<listitem>
<para>
We don't have <filename>/bin/bash</filename> and many scripts point
there. Similarly for <filename>/usr/bin/python</filename> .
</para>
</listitem>
<listitem>
<para>
We don't have the dynamic loader in <filename>/lib</filename>.
</para>
</listitem>
<listitem>
<para>
The <filename>steam.sh</filename> script in $HOME cannot be patched, as
it is checked and rewritten by steam.
</para>
</listitem>
<listitem>
<para>
The steam binary cannot be patched; it is also checked.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The current approach to deploy Steam in NixOS is composing a FHS-compatible
chroot environment, as documented
<link xlink:href="http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html">here</link>.
This allows us to have binaries in the expected paths without disrupting
the system, and to avoid patching them to work in a non FHS environment.
</para>
</section>
<section xml:id="sec-steam-play">
<title>How to play</title>
<para>
For 64-bit systems it's important to have
<programlisting>hardware.opengl.driSupport32Bit = true;</programlisting>
in your <filename>/etc/nixos/configuration.nix</filename>. You'll also need
<programlisting>hardware.pulseaudio.support32Bit = true;</programlisting>
if you are using PulseAudio - this will enable 32bit ALSA apps integration.
To use the Steam controller or other Steam supported controllers such as
the DualShock 4 or Nintendo Switch Pro, you need to add
<programlisting>hardware.steam-hardware.enable = true;</programlisting>
to your configuration.
</para>
</section>
<section xml:id="sec-steam-troub">
<title>Troubleshooting</title>
<para>
<variablelist>
<varlistentry>
<term>
Steam fails to start. What do I do?
</term>
<listitem>
<para>
Try to run
<programlisting>strace steam</programlisting>
to see what is causing steam to fail.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Using the FOSS Radeon or nouveau (nvidia) drivers
</term>
<listitem>
<itemizedlist>
<listitem>
<para>
The <literal>newStdcpp</literal> parameter was removed since NixOS
17.09 and should not be needed anymore.
</para>
</listitem>
<listitem>
<para>
Steam ships statically linked with a version of libcrypto that
conflicts with the one dynamically loaded by radeonsi_dri.so. If you
get the error
<programlisting>steam.sh: line 713: 7842 Segmentation fault (core dumped)</programlisting>
have a look at
<link xlink:href="https://github.com/NixOS/nixpkgs/pull/20269">this
pull request</link>.
</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Java
</term>
<listitem>
<orderedlist>
<listitem>
<para>
There is no java in steam chrootenv by default. If you get a message
like
<programlisting>/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found</programlisting>
You need to add
<programlisting> steam.override { withJava = true; };</programlisting>
to your configuration.
</para>
</listitem>
</orderedlist>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
<section xml:id="sec-steam-run">
<title>steam-run</title>
<para>
The FHS-compatible chroot used for steam can also be used to run other
Linux games that expect an FHS environment. To do so, add
<programlisting>pkgs.(steam.override {
nativeOnly = true;
newStdcpp = true;
}).run</programlisting>
to your configuration, rebuild, and run the game with
<programlisting>steam-run ./foo</programlisting>
</para>
</section>
</section>
<section xml:id="sec-citrix">
<title>Citrix Receiver &amp; Citrix Workspace App</title>
<para>
<note>
<para>
Please note that the <literal>citrix_receiver</literal> package has been deprecated since its
development was <link xlink:href="https://docs.citrix.com/en-us/citrix-workspace-app.html">discontinued by upstream</link>
and will be replaced by <link xlink:href="https://www.citrix.com/products/workspace-app/">the Citrix Workspace app</link>.
</para>
</note>
<link xlink:href="https://www.citrix.com/products/receiver/">Citrix Receiver</link> and
<link xlink:href="https://www.citrix.com/products/workspace-app/">Citrix Workspace App</link>
are remote desktop viewers which provide access to
<link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link>
installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually as the license
agreements of the vendor for
<link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">Citrix Receiver</link>
or <link xlink:href="https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html">Citrix Workspace</link>
need to be accepted first.
Then run <command>nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz</command>.
With the archive available
in the store the package can be built and installed with Nix.
</para>
<warning>
<title>Caution with <command>nix-shell</command> installs</title>
<para>
It's recommended to install <literal>Citrix Receiver</literal>
and/or <literal>Citrix Workspace</literal> using
<literal>nix-env -i</literal> or globally to
ensure that the <literal>.desktop</literal> files are installed properly
into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't be possible to
open <literal>.ica</literal> files automatically from the browser to start
a Citrix connection.
</para>
</warning>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Receiver</literal> and <literal>Citrix Workspace App</literal>
in <literal>nixpkgs</literal> trust several certificates
<link xlink:href="https://curl.haxx.se/docs/caextract.html">from the
Mozilla database</link> by default. However several companies using Citrix
might require their own corporate certificate. On distros with imperative
packaging these certs can be stored easily in
<link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>,
however this directory is a store path in <literal>nixpkgs</literal>. In
order to work around this issue the package provides a simple mechanism to
add custom certificates without rebuilding the entire package using
<literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_workspace.override { # the same applies for `citrix_receiver` if used.
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>
</chapter>

View file

@ -20,14 +20,14 @@
scripts. scripts.
</para> </para>
<programlisting> <programlisting>
stdenv.mkDerivation { stdenv.mkDerivation {
name = "libfoo-1.2.3"; name = "libfoo-1.2.3";
# ... # ...
buildPhase = '' buildPhase = ''
$CC -o hello hello.c $CC -o hello hello.c
''; '';
} }
</programlisting> </programlisting>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
@ -39,12 +39,12 @@
<function>fixupPhase</function>. <function>fixupPhase</function>.
</para> </para>
<programlisting> <programlisting>
stdenv.mkDerivation { stdenv.mkDerivation {
name = "libfoo-1.2.3"; name = "libfoo-1.2.3";
# ... # ...
makeFlags = stdenv.lib.optional stdenv.isDarwin "LDFLAGS=-Wl,-install_name,$(out)/lib/libfoo.dylib"; makeFlags = stdenv.lib.optional stdenv.isDarwin "LDFLAGS=-Wl,-install_name,$(out)/lib/libfoo.dylib";
} }
</programlisting> </programlisting>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
@ -62,19 +62,19 @@
<manvolnum>1</manvolnum></citerefentry> manpage. <manvolnum>1</manvolnum></citerefentry> manpage.
</para> </para>
<programlisting> <programlisting>
dyld: Library not loaded: /nix/store/7hnmbscpayxzxrixrgxvvlifzlxdsdir-jq-1.5-lib/lib/libjq.1.dylib dyld: Library not loaded: /nix/store/7hnmbscpayxzxrixrgxvvlifzlxdsdir-jq-1.5-lib/lib/libjq.1.dylib
Referenced from: /private/tmp/nix-build-jq-1.5.drv-0/jq-1.5/tests/../jq Referenced from: /private/tmp/nix-build-jq-1.5.drv-0/jq-1.5/tests/../jq
Reason: image not found Reason: image not found
./tests/jqtest: line 5: 75779 Abort trap: 6 ./tests/jqtest: line 5: 75779 Abort trap: 6
</programlisting> </programlisting>
<programlisting> <programlisting>
stdenv.mkDerivation { stdenv.mkDerivation {
name = "libfoo-1.2.3"; name = "libfoo-1.2.3";
# ... # ...
doInstallCheck = true; doInstallCheck = true;
installCheckTarget = "check"; installCheckTarget = "check";
} }
</programlisting> </programlisting>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
@ -85,19 +85,19 @@
on xcode. on xcode.
</para> </para>
<programlisting> <programlisting>
stdenv.mkDerivation { stdenv.mkDerivation {
name = "libfoo-1.2.3"; name = "libfoo-1.2.3";
# ... # ...
prePatch = '' prePatch = ''
substituteInPlace Makefile \ substituteInPlace Makefile \
--replace '/usr/bin/xcrun clang' clang --replace '/usr/bin/xcrun clang' clang
''; '';
} }
</programlisting> </programlisting>
<para> <para>
The package <literal>xcbuild</literal> can be used to build projects that The package <literal>xcbuild</literal> can be used to build projects that
really depend on Xcode. However, this replacement is not 100% really depend on Xcode. However, this replacement is not 100% compatible
compatible with Xcode and can occasionally cause issues. with Xcode and can occasionally cause issues.
</para> </para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>

View file

@ -9,8 +9,8 @@
<para> <para>
Checkout the Nixpkgs source tree: Checkout the Nixpkgs source tree:
<screen> <screen>
$ git clone https://github.com/NixOS/nixpkgs <prompt>$ </prompt>git clone https://github.com/NixOS/nixpkgs
$ cd nixpkgs</screen> <prompt>$ </prompt>cd nixpkgs</screen>
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
@ -23,7 +23,7 @@ $ cd nixpkgs</screen>
See <xref linkend="sec-organisation" /> for some hints on the tree See <xref linkend="sec-organisation" /> for some hints on the tree
organisation. Create a directory for your package, e.g. organisation. Create a directory for your package, e.g.
<screen> <screen>
$ mkdir pkgs/development/libraries/libfoo</screen> <prompt>$ </prompt>mkdir pkgs/development/libraries/libfoo</screen>
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
@ -34,8 +34,8 @@ $ mkdir pkgs/development/libraries/libfoo</screen>
as arguments, and returns a build of the package in the Nix store. The as arguments, and returns a build of the package in the Nix store. The
expression should usually be called <filename>default.nix</filename>. expression should usually be called <filename>default.nix</filename>.
<screen> <screen>
$ emacs pkgs/development/libraries/libfoo/default.nix <prompt>$ </prompt>emacs pkgs/development/libraries/libfoo/default.nix
$ git add pkgs/development/libraries/libfoo/default.nix</screen> <prompt>$ </prompt>git add pkgs/development/libraries/libfoo/default.nix</screen>
</para> </para>
<para> <para>
You can have a look at the existing Nix expressions under You can have a look at the existing Nix expressions under
@ -148,8 +148,8 @@ $ git add pkgs/development/libraries/libfoo/default.nix</screen>
<listitem> <listitem>
<para> <para>
You can use <command>nix-prefetch-url</command> You can use <command>nix-prefetch-url</command>
<replaceable>url</replaceable> to get the <replaceable>url</replaceable> to get the SHA-256 hash of source
SHA-256 hash of source distributions. There are similar commands, such as distributions. There are similar commands, such as
<command>nix-prefetch-git</command> and <command>nix-prefetch-git</command> and
<command>nix-prefetch-hg</command>, available in <command>nix-prefetch-hg</command>, available in
the <literal>nix-prefetch-scripts</literal> package. the <literal>nix-prefetch-scripts</literal> package.
@ -180,7 +180,7 @@ $ git add pkgs/development/libraries/libfoo/default.nix</screen>
with some descriptive name for the variable, e.g. with some descriptive name for the variable, e.g.
<varname>libfoo</varname>. <varname>libfoo</varname>.
<screen> <screen>
$ emacs pkgs/top-level/all-packages.nix</screen> <prompt>$ </prompt>emacs pkgs/top-level/all-packages.nix</screen>
</para> </para>
<para> <para>
The attributes in that file are sorted by category (like “Development / The attributes in that file are sorted by category (like “Development /
@ -193,7 +193,7 @@ $ emacs pkgs/top-level/all-packages.nix</screen>
To test whether the package builds, run the following command from the To test whether the package builds, run the following command from the
root of the nixpkgs source tree: root of the nixpkgs source tree:
<screen> <screen>
$ nix-build -A libfoo</screen> <prompt>$ </prompt>nix-build -A libfoo</screen>
where <varname>libfoo</varname> should be the variable name defined in the where <varname>libfoo</varname> should be the variable name defined in the
previous step. You may want to add the flag <option>-K</option> to keep previous step. You may want to add the flag <option>-K</option> to keep
the temporary build directory in case something fails. If the build the temporary build directory in case something fails. If the build
@ -205,13 +205,17 @@ $ nix-build -A libfoo</screen>
<para> <para>
If you want to install the package into your profile (optional), do If you want to install the package into your profile (optional), do
<screen> <screen>
$ nix-env -f . -iA libfoo</screen> <prompt>$ </prompt>nix-env -f . -iA libfoo</screen>
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
Optionally commit the new package and open a pull request, or send a patch Optionally commit the new package and open a pull request <link
to <literal>https://groups.google.com/forum/#!forum/nix-devel</literal>. xlink:href="https://github.com/NixOS/nixpkgs/pulls">to nixpkgs</link>, or
use <link
xlink:href="https://discourse.nixos.org/t/about-the-patches-category/477">
the Patches category</link> on Discourse for sending a patch without a
GitHub account.
</para> </para>
</listitem> </listitem>
</orderedlist> </orderedlist>
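As an illustration of the steps above, a minimal default.nix for the hypothetical libfoo package could look roughly like this — a sketch only: the name, URL, version and hash are placeholders, and the sha256 is whatever nix-prefetch-url prints:

  { stdenv, fetchurl }:

  stdenv.mkDerivation rec {
    pname = "libfoo";      # hypothetical package from the steps above
    version = "1.2.3";     # placeholder version
    src = fetchurl {
      url = "https://example.org/libfoo-${version}.tar.gz";            # placeholder URL
      sha256 = "0000000000000000000000000000000000000000000000000000"; # value printed by nix-prefetch-url
    };
  }

It would then be exposed from pkgs/top-level/all-packages.nix, e.g. as libfoo = callPackage ../development/libraries/libfoo { };, and built with nix-build -A libfoo as described above.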

View file

@ -24,11 +24,13 @@
<para> <para>
The high change rate of Nixpkgs makes any pull request that remains open for The high change rate of Nixpkgs makes any pull request that remains open for
too long subject to conflicts that will require extra work from the submitter too long subject to conflicts that will require extra work from the submitter
or the merger. Reviewing pull requests in a timely manner and being responsive or the merger. Reviewing pull requests in a timely manner and being
to the comments is the key to avoid this issue. GitHub provides sort filters responsive to the comments is the key to avoid this issue. GitHub provides
that can be used to see the <link sort filters that can be used to see the
<link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc">most
recently</link> and the <link recently</link> and the
<link
xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-asc">least xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-asc">least
recently</link> updated pull requests. We highly encourage looking at recently</link> updated pull requests. We highly encourage looking at
<link xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+review%3Anone+status%3Asuccess+-label%3A%222.status%3A+work-in-progress%22+no%3Aproject+no%3Aassignee+no%3Amilestone"> <link xlink:href="https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+review%3Anone+status%3Asuccess+-label%3A%222.status%3A+work-in-progress%22+no%3Aproject+no%3Aassignee+no%3Amilestone">
@ -151,11 +153,11 @@
nixpkgs-unstable for easier review by running the following commands nixpkgs-unstable for easier review by running the following commands
from a nixpkgs clone. from a nixpkgs clone.
<screen> <screen>
$ git remote add channels https://github.com/NixOS/nixpkgs-channels.git <co <prompt>$ </prompt>git remote add channels https://github.com/NixOS/nixpkgs-channels.git <co
xml:id='reviewing-rebase-1' /> xml:id='reviewing-rebase-1' />
$ git fetch channels nixos-unstable <co xml:id='reviewing-rebase-2' /> <prompt>$ </prompt>git fetch channels nixos-unstable <co xml:id='reviewing-rebase-2' />
$ git fetch origin pull/PRNUMBER/head <co xml:id='reviewing-rebase-3' /> <prompt>$ </prompt>git fetch origin pull/PRNUMBER/head <co xml:id='reviewing-rebase-3' />
$ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD <co <prompt>$ </prompt>git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD <co
xml:id='reviewing-rebase-4' /> xml:id='reviewing-rebase-4' />
</screen> </screen>
<calloutlist> <calloutlist>
@ -187,14 +189,15 @@ $ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD <co
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
The <link xlink:href="https://github.com/madjar/nox">nox</link> tool can The
be used to review the content of a pull request in a single command. It doesn't <link xlink:href="https://github.com/Mic92/nix-review">nix-review</link>
rebase on a channel branch so it might trigger multiple source builds. tool can be used to review the content of a pull request in a single command.
<varname>PRNUMBER</varname> should be replaced by the number at the end <varname>PRNUMBER</varname> should be replaced by the number at the end
of the pull request title. of the pull request title. You can also provide the full GitHub pull
request URL.
</para> </para>
<screen> <screen>
$ nix-shell -p nox --run "nox-review -k pr PRNUMBER" <prompt>$ </prompt>nix-shell -p nix-review --run "nix-review pr PRNUMBER"
</screen> </screen>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
@ -609,8 +612,8 @@ policy.
create an issue or post on create an issue or post on
<link <link
xlink:href="https://discourse.nixos.org">Discourse</link> with xlink:href="https://discourse.nixos.org">Discourse</link> with
references of packages and modules they maintain so the maintainership can be references of packages and modules they maintain so the maintainership can
taken over by other contributors. be taken over by other contributors.
</para> </para>
</section> </section>
</chapter> </chapter>

File diff suppressed because it is too large

View file

@ -36,8 +36,8 @@
</listitem> </listitem>
</itemizedlist> </itemizedlist>
<screen> <screen>
$ git checkout 0998212 <prompt>$ </prompt>git checkout 0998212
$ git checkout -b 'fix/pkg-name-update' <prompt>$ </prompt>git checkout -b 'fix/pkg-name-update'
</screen> </screen>
</para> </para>
</listitem> </listitem>
@ -351,25 +351,24 @@ Additional information.
</section> </section>
<section xml:id="submitting-changes-tested-compilation"> <section xml:id="submitting-changes-tested-compilation">
<title>Tested compilation of all pkgs that depend on this change using <command>nox-review</command></title> <title>Tested compilation of all pkgs that depend on this change using <command>nix-review</command></title>
<para> <para>
If you are updating a package's version, you can use nox to make sure all If you are updating a package's version, you can use nix-review to make
packages that depend on the updated package still compile correctly. This sure all packages that depend on the updated package still compile
can be done using the nox utility. The <command>nox-review</command> correctly. The <command>nix-review</command> utility can look for and build
utility can look for and build all dependencies either based on uncommitted all dependencies either based on uncommitted changes with the
changes with the <literal>wip</literal> option or specifying a GitHub pull <literal>wip</literal> option or specifying a GitHub pull request number.
request number.
</para>
<para>
review uncommitted changes:
<screen>nix-shell -p nox --run "nox-review wip"</screen>
</para> </para>
<para> <para>
review changes from pull request number 12345: review changes from pull request number 12345:
<screen>nix-shell -p nox --run "nox-review pr 12345"</screen> <screen>nix-shell -p nix-review --run "nix-review pr 12345"</screen>
</para>
<para>
review uncommitted changes:
<screen>nix-shell -p nix-review --run "nix-review wip"</screen>
</para> </para>
</section> </section>
@ -515,7 +514,7 @@ The original commit message describing the reason why the world was torn apart.
(cherry picked from commit abcdef) (cherry picked from commit abcdef)
Reason: I just had a gut feeling that this would also be wanted by people from Reason: I just had a gut feeling that this would also be wanted by people from
the stone age. the stone age.
</screen> </screen>
</listitem> </listitem>
</itemizedlist> </itemizedlist>
</section> </section>

View file

@ -50,7 +50,7 @@ let
filesystem = callLibs ./filesystem.nix; filesystem = callLibs ./filesystem.nix;
# back-compat aliases # back-compat aliases
platforms = systems.forMeta; platforms = systems.doubles;
inherit (builtins) add addErrorContext attrNames concatLists inherit (builtins) add addErrorContext attrNames concatLists
deepSeq elem elemAt filter genericClosure genList getAttr deepSeq elem elemAt filter genericClosure genList getAttr
@ -59,7 +59,7 @@ let
stringLength sub substring tail; stringLength sub substring tail;
inherit (trivial) id const concat or and bitAnd bitOr bitXor bitNot inherit (trivial) id const concat or and bitAnd bitOr bitXor bitNot
boolToString mergeAttrs flip mapNullable inNixShell min max boolToString mergeAttrs flip mapNullable inNixShell min max
importJSON warn info nixpkgsVersion version mod compare importJSON warn info showWarnings nixpkgsVersion version mod compare
splitByAndCompare functionArgs setFunctionArgs isFunction; splitByAndCompare functionArgs setFunctionArgs isFunction;
inherit (fixedPoints) fix fix' converge extends composeExtensions inherit (fixedPoints) fix fix' converge extends composeExtensions
makeExtensible makeExtensibleWithCustomName; makeExtensible makeExtensibleWithCustomName;
@ -71,7 +71,7 @@ let
zipAttrsWithNames zipAttrsWith zipAttrs recursiveUpdateUntil zipAttrsWithNames zipAttrsWith zipAttrs recursiveUpdateUntil
recursiveUpdate matchAttrs overrideExisting getOutput getBin recursiveUpdate matchAttrs overrideExisting getOutput getBin
getLib getDev chooseDevOutputs zipWithNames zip; getLib getDev chooseDevOutputs zipWithNames zip;
inherit (lists) singleton foldr fold foldl foldl' imap0 imap1 inherit (lists) singleton forEach foldr fold foldl foldl' imap0 imap1
concatMap flatten remove findSingle findFirst any all count concatMap flatten remove findSingle findFirst any all count
optional optionals toList range partition zipListsWith zipLists optional optionals toList range partition zipListsWith zipLists
reverseList listDfs toposort sort naturalSort compareLists take reverseList listDfs toposort sort naturalSort compareLists take
@ -81,7 +81,7 @@ let
intersperse concatStringsSep concatMapStringsSep intersperse concatStringsSep concatMapStringsSep
concatImapStringsSep makeSearchPath makeSearchPathOutput concatImapStringsSep makeSearchPath makeSearchPathOutput
makeLibraryPath makeBinPath optionalString makeLibraryPath makeBinPath optionalString
hasPrefix hasSuffix stringToCharacters stringAsChars escape hasInfix hasPrefix hasSuffix stringToCharacters stringAsChars escape
escapeShellArg escapeShellArgs replaceChars lowerChars escapeShellArg escapeShellArgs replaceChars lowerChars
upperChars toLower toUpper addContextFrom splitString upperChars toLower toUpper addContextFrom splitString
removePrefix removeSuffix versionOlder versionAtLeast getVersion removePrefix removeSuffix versionOlder versionAtLeast getVersion
@ -109,7 +109,7 @@ let
mkFixStrictness mkOrder mkBefore mkAfter mkAliasDefinitions mkFixStrictness mkOrder mkBefore mkAfter mkAliasDefinitions
mkAliasAndWrapDefinitions fixMergeModules mkRemovedOptionModule mkAliasAndWrapDefinitions fixMergeModules mkRemovedOptionModule
mkRenamedOptionModule mkMergedOptionModule mkChangedOptionModule mkRenamedOptionModule mkMergedOptionModule mkChangedOptionModule
mkAliasOptionModule mkAliasOptionModuleWithPriority doRename filterModules; mkAliasOptionModule doRename filterModules;
inherit (options) isOption mkEnableOption mkSinkUndeclaredOptions inherit (options) isOption mkEnableOption mkSinkUndeclaredOptions
mergeDefaultOption mergeOneOption mergeEqualOption getValues mergeDefaultOption mergeOneOption mergeEqualOption getValues
getFiles optionAttrSetToDocList optionAttrSetToDocList' getFiles optionAttrSetToDocList optionAttrSetToDocList'

View file

@ -30,9 +30,12 @@ rec {
# nix-repl> converge (x: x / 2) 16 # nix-repl> converge (x: x / 2) 16
# 0 # 0
converge = f: x: converge = f: x:
if (f x) == x let
then x x' = f x;
else converge f (f x); in
if x' == x
then x
else converge f x';
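The point of the new definition above is that `f x` is evaluated only once per iteration. A minimal sketch to observe this, assuming it is evaluated from the root of a nixpkgs checkout (the trace message is purely illustrative):

  let
    lib = import ./lib;   # assuming a nixpkgs checkout as the working directory
    f = x: builtins.trace "f applied to ${toString x}" (x / 2);
  in
    lib.converge f 16     # => 0; each intermediate value is traced once here,
                          # whereas the old definition evaluated `f x` twice per step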
# Modify the contents of an explicitly recursive attribute set in a way that # Modify the contents of an explicitly recursive attribute set in a way that
# honors `self`-references. This is accomplished with a function # honors `self`-references. This is accomplished with a function

View file

@ -178,7 +178,7 @@ rec {
toPlist = {}: v: let toPlist = {}: v: let
isFloat = builtins.isFloat or (x: false); isFloat = builtins.isFloat or (x: false);
expr = ind: x: with builtins; expr = ind: x: with builtins;
if isNull x then "" else if x == null then "" else
if isBool x then bool ind x else if isBool x then bool ind x else
if isInt x then int ind x else if isInt x then int ind x else
if isString x then str ind x else if isString x then str ind x else

View file

@ -145,6 +145,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
free = false; free = false;
}; };
cc-by-nc-30 = spdx {
spdxId = "CC-BY-NC-3.0";
fullName = "Creative Commons Attribution Non Commercial 3.0 Unported";
free = false;
};
cc-by-nc-40 = spdx { cc-by-nc-40 = spdx {
spdxId = "CC-BY-NC-4.0"; spdxId = "CC-BY-NC-4.0";
fullName = "Creative Commons Attribution Non Commercial 4.0 International"; fullName = "Creative Commons Attribution Non Commercial 4.0 International";
@ -428,12 +434,12 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
lgpl21 = spdx { lgpl21 = spdx {
spdxId = "LGPL-2.1-only"; spdxId = "LGPL-2.1-only";
fullName = "GNU Library General Public License v2.1 only"; fullName = "GNU Lesser General Public License v2.1 only";
}; };
lgpl21Plus = spdx { lgpl21Plus = spdx {
spdxId = "LGPL-2.1-or-later"; spdxId = "LGPL-2.1-or-later";
fullName = "GNU Library General Public License v2.1 or later"; fullName = "GNU Lesser General Public License v2.1 or later";
}; };
lgpl3 = spdx { lgpl3 = spdx {
@ -451,9 +457,9 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "libpng License"; fullName = "libpng License";
}; };
libpng2 = { libpng2 = spdx {
fullName = "libpng License v2"; # 1.6.36+ spdxId = "libpng-2.0"; # Used since libpng 1.6.36.
url = "http://www.libpng.org/pub/png/src/libpng-LICENSE.txt"; fullName = "PNG Reference Library version 2";
}; };
libtiff = spdx { libtiff = spdx {
@ -561,6 +567,11 @@ lib.mapAttrs (n: v: v // { shortName = n; }) rec {
fullName = "OpenSSL License"; fullName = "OpenSSL License";
}; };
osl2 = spdx {
spdxId = "OSL-2.0";
fullName = "Open Software License 2.0";
};
osl21 = spdx { osl21 = spdx {
spdxId = "OSL-2.1"; spdxId = "OSL-2.1";
fullName = "Open Software License 2.1"; fullName = "Open Software License 2.1";

View file

@ -7,7 +7,7 @@ let
in in
rec { rec {
inherit (builtins) head tail length isList elemAt concatLists filter elem genList; inherit (builtins) head tail length isList elemAt concatLists filter elem genList map;
/* Create a list consisting of a single element. `singleton x` is /* Create a list consisting of a single element. `singleton x` is
sometimes more convenient with respect to indentation than `[x]` sometimes more convenient with respect to indentation than `[x]`
@ -21,6 +21,19 @@ rec {
*/ */
singleton = x: [x]; singleton = x: [x];
/* Apply the function to each element in the list. Same as `map`, but arguments
flipped.
Type: forEach :: [a] -> (a -> b) -> [b]
Example:
forEach [ 1 2 ] (x:
toString x
)
=> [ "1" "2" ]
*/
forEach = xs: f: map f xs;
/* right fold a binary function `op` between successive elements of /* right fold a binary function `op` between successive elements of
`list` with `nul' as the starting value, i.e., `list` with `nul' as the starting value, i.e.,
`foldr op nul [x_1 x_2 ... x_n] == op x_1 (op x_2 ... (op x_n nul))`. `foldr op nul [x_1 x_2 ... x_n] == op x_1 (op x_2 ... (op x_n nul))`.
@ -633,8 +646,7 @@ rec {
else else
let let
x = head list; x = head list;
xs = unique (drop 1 list); in [x] ++ unique (remove x list);
in [x] ++ remove x xs;
/* Intersects list 'e' and another list. O(nm) complexity. /* Intersects list 'e' and another list. O(nm) complexity.
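A quick sketch of the two list functions touched above, evaluable with nix-instantiate --eval --strict (values are illustrative):

  let lib = import <nixpkgs/lib>; in {
    doubled = lib.forEach [ 1 2 3 ] (x: x * 2);   # => [ 2 4 6 ], same as map with flipped arguments
    deduped = lib.unique  [ 3 2 3 1 2 ];          # => [ 3 2 1 ], first occurrence of each element wins
  }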

View file

@ -323,16 +323,14 @@ rec {
else else
mergeDefinitions loc opt.type defs'; mergeDefinitions loc opt.type defs';
# Check whether the option is defined, and apply the apply
# function to the merged value. This allows options to yield a # The value with a check that it is defined
# value computed from the definitions. valueDefined = if res.isDefined then res.mergedValue else
value = throw "The option `${showOption loc}' is used but not defined.";
if !res.isDefined then
throw "The option `${showOption loc}' is used but not defined." # Apply the 'apply' function to the merged value. This allows options to
else if opt ? apply then # yield a value computed from the definitions
opt.apply res.mergedValue value = if opt ? apply then opt.apply valueDefined else valueDefined;
else
res.mergedValue;
in opt // in opt //
{ value = builtins.addErrorContext "while evaluating the option `${showOption loc}':" value; { value = builtins.addErrorContext "while evaluating the option `${showOption loc}':" value;
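The `apply` behaviour that this refactoring preserves can be summarised with a small hypothetical module (option name and transformation are illustrative):

  { lib, ... }: {
    options.example.greeting = lib.mkOption {
      type = lib.types.str;
      default = "hello";
      # `apply` post-processes the merged value: anything reading
      # config.example.greeting sees the upper-cased result.
      apply = lib.toUpper;
    };
  }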
@ -476,8 +474,22 @@ rec {
optionSet to options of type submodule. FIXME: remove optionSet to options of type submodule. FIXME: remove
eventually. */ eventually. */
fixupOptionType = loc: opt: fixupOptionType = loc: opt:
if opt.type.getSubModules or null == null let
then opt // { type = opt.type or types.unspecified; } options = opt.options or
(throw "Option `${showOption loc'}' has type optionSet but has no option attribute, in ${showFiles opt.declarations}.");
f = tp:
let optionSetIn = type: (tp.name == type) && (tp.functor.wrapped.name == "optionSet");
in
if tp.name == "option set" || tp.name == "submodule" then
throw "The option ${showOption loc} uses submodules without a wrapping type, in ${showFiles opt.declarations}."
else if optionSetIn "attrsOf" then types.attrsOf (types.submodule options)
else if optionSetIn "loaOf" then types.loaOf (types.submodule options)
else if optionSetIn "listOf" then types.listOf (types.submodule options)
else if optionSetIn "nullOr" then types.nullOr (types.submodule options)
else tp;
in
if opt.type.getSubModules or null == null
then opt // { type = f (opt.type or types.unspecified); }
else opt // { type = opt.type.substSubModules opt.options; options = []; }; else opt // { type = opt.type.substSubModules opt.options; options = []; };
@ -596,6 +608,9 @@ rec {
forwards any definitions of boot.copyKernels to forwards any definitions of boot.copyKernels to
boot.loader.grub.copyKernels while printing a warning. boot.loader.grub.copyKernels while printing a warning.
This also copies over the priority from the aliased option to the
non-aliased option.
*/ */
mkRenamedOptionModule = from: to: doRename { mkRenamedOptionModule = from: to: doRename {
inherit from to; inherit from to;
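Using the boot.copyKernels example from the comment above, a rename (including the priority copy this change documents) would be declared roughly like this — a sketch, not taken verbatim from any module:

  { lib, ... }: {
    imports = [
      # forwards definitions of boot.copyKernels to boot.loader.grub.copyKernels,
      # printing a warning and (now) carrying the definition priority over as well
      (lib.mkRenamedOptionModule [ "boot" "copyKernels" ] [ "boot" "loader" "grub" "copyKernels" ])
    ];
  }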
@ -690,16 +705,7 @@ rec {
use = id; use = id;
}; };
/* Like mkAliasOptionModule, but copy over the priority of the option as well. */ doRename = { from, to, visible, warn, use, withPriority ? true }:
mkAliasOptionModuleWithPriority = from: to: doRename {
inherit from to;
visible = true;
warn = false;
use = id;
withPriority = true;
};
doRename = { from, to, visible, warn, use, withPriority ? false }:
{ config, options, ... }: { config, options, ... }:
let let
fromOpt = getAttrFromPath from options; fromOpt = getAttrFromPath from options;

View file

@ -36,7 +36,7 @@ rec {
example ? null, example ? null,
# String describing the option. # String describing the option.
description ? null, description ? null,
# Related packages used in the manual (see `genRelatedPackages` in ../nixos/doc/manual/default.nix). # Related packages used in the manual (see `genRelatedPackages` in ../nixos/lib/make-options-doc/default.nix).
relatedPackages ? null, relatedPackages ? null,
# Option type, providing type-checking and value merging. # Option type, providing type-checking and value merging.
type ? null, type ? null,
@ -48,6 +48,8 @@ rec {
visible ? null, visible ? null,
# Whether the option can be set only once # Whether the option can be set only once
readOnly ? null, readOnly ? null,
# Deprecated, used by types.optionSet.
options ? null
} @ attrs: } @ attrs:
attrs // { _type = "option"; }; attrs // { _type = "option"; };
@ -99,7 +101,7 @@ rec {
mergeOneOption = loc: defs: mergeOneOption = loc: defs:
if defs == [] then abort "This case should never happen." if defs == [] then abort "This case should never happen."
else if length defs != 1 then else if length defs != 1 then
throw "The unique option `${showOption loc}' is defined multiple times, in ${showFiles (getFiles defs)}." throw "The unique option `${showOption loc}' is defined multiple times, in:\n - ${concatStringsSep "\n - " (getFiles defs)}."
else (head defs).value; else (head defs).value;
/* "Merge" option definitions by checking that they all have the same value. */ /* "Merge" option definitions by checking that they all have the same value. */
@ -141,7 +143,7 @@ rec {
docOption = rec { docOption = rec {
loc = opt.loc; loc = opt.loc;
name = showOption opt.loc; name = showOption opt.loc;
description = opt.description or (throw "Option `${name}' has no description."); description = opt.description or (lib.warn "Option `${name}' has no description." "This option has no description.");
declarations = filter (x: x != unknownModule) opt.declarations; declarations = filter (x: x != unknownModule) opt.declarations;
internal = opt.internal or false; internal = opt.internal or false;
visible = opt.visible or true; visible = opt.visible or true;

View file

@ -12,8 +12,8 @@ rec {
# Bring in a path as a source, filtering out all Subversion and CVS # Bring in a path as a source, filtering out all Subversion and CVS
# directories, as well as backup files (*~). # directories, as well as backup files (*~).
cleanSourceFilter = name: type: let baseName = baseNameOf (toString name); in ! ( cleanSourceFilter = name: type: let baseName = baseNameOf (toString name); in ! (
# Filter out Subversion and CVS directories. # Filter out version control software files/directories
(type == "directory" && (baseName == ".git" || baseName == ".svn" || baseName == "CVS" || baseName == ".hg")) || (baseName == ".git" || type == "directory" && (baseName == ".svn" || baseName == "CVS" || baseName == ".hg")) ||
# Filter out editor backup / swap files. # Filter out editor backup / swap files.
lib.hasSuffix "~" baseName || lib.hasSuffix "~" baseName ||
builtins.match "^\\.sw[a-z]$" baseName != null || builtins.match "^\\.sw[a-z]$" baseName != null ||
@ -53,12 +53,16 @@ rec {
# Filter sources by a list of regular expressions. # Filter sources by a list of regular expressions.
# #
# E.g. `src = sourceByRegex ./my-subproject [".*\.py$" "^database.sql$"]` # E.g. `src = sourceByRegex ./my-subproject [".*\.py$" "^database.sql$"]`
sourceByRegex = src: regexes: cleanSourceWith { sourceByRegex = src: regexes:
filter = (path: type: let
let relPath = lib.removePrefix (toString src + "/") (toString path); isFiltered = src ? _isLibCleanSourceWith;
in lib.any (re: builtins.match re relPath != null) regexes); origSrc = if isFiltered then src.origSrc else src;
inherit src; in lib.cleanSourceWith {
}; filter = (path: type:
let relPath = lib.removePrefix (toString origSrc + "/") (toString path);
in lib.any (re: builtins.match re relPath != null) regexes);
inherit src;
};
# Get all files ending with the specified suffices from the given # Get all files ending with the specified suffices from the given
# directory or its descendants. E.g. `sourceFilesBySuffices ./dir # directory or its descendants. E.g. `sourceFilesBySuffices ./dir
@ -83,7 +87,7 @@ rec {
# Sometimes git stores the commitId directly in the file but # Sometimes git stores the commitId directly in the file but
# sometimes it stores something like: «ref: refs/heads/branch-name» # sometimes it stores something like: «ref: refs/heads/branch-name»
matchRef = match "^ref: (.*)$" fileContent; matchRef = match "^ref: (.*)$" fileContent;
in if isNull matchRef in if matchRef == null
then fileContent then fileContent
else readCommitFromFile (lib.head matchRef) path else readCommitFromFile (lib.head matchRef) path
# Sometimes, the file isn't there at all and has been packed away in the # Sometimes, the file isn't there at all and has been packed away in the
@ -92,7 +96,7 @@ rec {
then then
let fileContent = readFile packedRefsName; let fileContent = readFile packedRefsName;
matchRef = match (".*\n([^\n ]*) " + file + "\n.*") fileContent; matchRef = match (".*\n([^\n ]*) " + file + "\n.*") fileContent;
in if isNull matchRef in if matchRef == null
then throw ("Could not find " + file + " in " + packedRefsName) then throw ("Could not find " + file + " in " + packedRefsName)
else lib.head matchRef else lib.head matchRef
else throw ("Not a .git directory: " + path); else throw ("Not a .git directory: " + path);

View file

@ -90,7 +90,7 @@ rec {
/* Same as `concatMapStringsSep`, but the mapping function /* Same as `concatMapStringsSep`, but the mapping function
additionally receives the position of its argument. additionally receives the position of its argument.
Type: concatMapStringsSep :: string -> (int -> string -> string) -> [string] -> string Type: concatImapStringsSep :: string -> (int -> string -> string) -> [string] -> string
Example: Example:
concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ] concatImapStringsSep "-" (pos: x: toString (x / pos)) [ 6 6 6 ]

View file

@ -3,7 +3,6 @@
rec { rec {
doubles = import ./doubles.nix { inherit lib; }; doubles = import ./doubles.nix { inherit lib; };
forMeta = import ./for-meta.nix { inherit lib; };
parse = import ./parse.nix { inherit lib; }; parse = import ./parse.nix { inherit lib; };
inspect = import ./inspect.nix { inherit lib; }; inspect = import ./inspect.nix { inherit lib; };
platforms = import ./platforms.nix { inherit lib; }; platforms = import ./platforms.nix { inherit lib; };
@ -15,7 +14,9 @@ rec {
# `parsed` is inferred from args, both because there are two options with one # `parsed` is inferred from args, both because there are two options with one
# clearly preferred, and to prevent cycles. A simpler fixed point where the RHS # clearly preferred, and to prevent cycles. A simpler fixed point where the RHS
# always just used `final.*` would fail on both counts. # always just used `final.*` would fail on both counts.
elaborate = args: let elaborate = args': let
args = if lib.isString args' then { system = args'; }
else args';
final = { final = {
# Prefer to parse `config` as it is strictly more informative. # Prefer to parse `config` as it is strictly more informative.
parsed = parse.mkSystemFromString (if args ? config then args.config else args.system); parsed = parse.mkSystemFromString (if args ? config then args.config else args.system);
@ -24,15 +25,20 @@ rec {
config = parse.tripleFromSystem final.parsed; config = parse.tripleFromSystem final.parsed;
# Just a guess, based on `system` # Just a guess, based on `system`
platform = platforms.selectBySystem final.system; platform = platforms.selectBySystem final.system;
# Determine whether we are compatible with the provided CPU
isCompatible = platform: parse.isCompatible final.parsed.cpu platform.parsed.cpu;
# Derived meta-data # Derived meta-data
libc = libc =
/**/ if final.isDarwin then "libSystem" /**/ if final.isDarwin then "libSystem"
else if final.isMinGW then "msvcrt" else if final.isMinGW then "msvcrt"
else if final.isWasi then "wasilibc"
else if final.isMusl then "musl" else if final.isMusl then "musl"
else if final.isUClibc then "uclibc" else if final.isUClibc then "uclibc"
else if final.isAndroid then "bionic" else if final.isAndroid then "bionic"
else if final.isLinux /* default */ then "glibc" else if final.isLinux /* default */ then "glibc"
else if final.isMsp430 then "newlib"
else if final.isAvr then "avrlibc" else if final.isAvr then "avrlibc"
else if final.isNetBSD then "nblibc"
# TODO(@Ericson2314) think more about other operating systems # TODO(@Ericson2314) think more about other operating systems
else "native/impure"; else "native/impure";
extensions = { extensions = {
@ -58,7 +64,7 @@ rec {
"netbsd" = "NetBSD"; "netbsd" = "NetBSD";
"freebsd" = "FreeBSD"; "freebsd" = "FreeBSD";
"openbsd" = "OpenBSD"; "openbsd" = "OpenBSD";
"wasm" = "Wasm"; "wasi" = "Wasi";
}.${final.parsed.kernel.name} or null; }.${final.parsed.kernel.name} or null;
# uname -p # uname -p
@ -68,16 +74,22 @@ rec {
release = null; release = null;
}; };
kernelArch =
if final.isAarch32 then "arm"
else if final.isAarch64 then "arm64"
else if final.isx86_32 then "x86"
else if final.isx86_64 then "ia64"
else final.parsed.cpu.name;
qemuArch = qemuArch =
if final.isArm then "arm" if final.isArm then "arm"
else if final.isx86_64 then "x86_64" else if final.isx86_64 then "x86_64"
else if final.isx86 then "i386" else if final.isx86 then "i386"
else { else {
"powerpc" = "ppc"; "powerpc" = "ppc";
"powerpcle" = "ppc";
"powerpc64" = "ppc64"; "powerpc64" = "ppc64";
"powerpc64le" = "ppc64"; "powerpc64le" = "ppc64le";
"mips64" = "mips";
"mipsel64" = "mipsel";
}.${final.parsed.cpu.name} or final.parsed.cpu.name; }.${final.parsed.cpu.name} or final.parsed.cpu.name;
emulator = pkgs: let emulator = pkgs: let
@ -98,13 +110,14 @@ rec {
wine = (pkgs.winePackagesFor wine-name).minimal; wine = (pkgs.winePackagesFor wine-name).minimal;
in in
if final.parsed.kernel.name == pkgs.stdenv.hostPlatform.parsed.kernel.name && if final.parsed.kernel.name == pkgs.stdenv.hostPlatform.parsed.kernel.name &&
(final.parsed.cpu.name == pkgs.stdenv.hostPlatform.parsed.cpu.name || pkgs.stdenv.hostPlatform.isCompatible final
(final.isi686 && pkgs.stdenv.hostPlatform.isx86_64)) then "${pkgs.runtimeShell} -c '\"$@\"' --"
then pkgs.runtimeShell
else if final.isWindows else if final.isWindows
then "${wine}/bin/${wine-name}" then "${wine}/bin/${wine-name}"
else if final.isLinux && pkgs.stdenv.hostPlatform.isLinux else if final.isLinux && pkgs.stdenv.hostPlatform.isLinux
then "${qemu-user}/bin/qemu-${final.qemuArch}" then "${qemu-user}/bin/qemu-${final.qemuArch}"
else if final.isWasi
then "${pkgs.wasmtime}/bin/wasmtime"
else throw "Don't know how to run ${final.config} executables."; else throw "Don't know how to run ${final.config} executables.";
} // mapAttrs (n: v: v final.parsed) inspect.predicates } // mapAttrs (n: v: v final.parsed) inspect.predicates
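With the new string handling in `elaborate`, a plain double can be elaborated directly; a sketch of some of the derived attributes, assuming <nixpkgs> points at this tree:

  let lib = import <nixpkgs/lib>; in rec {
    plat     = lib.systems.elaborate "x86_64-linux";        # a bare double string now works
    libc     = plat.libc;                                   # => "glibc" (the Linux default above)
    qemuArch = plat.qemuArch;                               # => "x86_64"
    wasiLibc = (lib.systems.elaborate "wasm32-wasi").libc;  # => "wasilibc"
  }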

View file

@ -13,10 +13,20 @@ let
"i686-cygwin" "i686-freebsd" "i686-linux" "i686-netbsd" "i686-openbsd" "i686-cygwin" "i686-freebsd" "i686-linux" "i686-netbsd" "i686-openbsd"
"x86_64-cygwin" "x86_64-darwin" "x86_64-freebsd" "x86_64-linux" "x86_64-cygwin" "x86_64-freebsd" "x86_64-linux"
"x86_64-netbsd" "x86_64-openbsd" "x86_64-solaris" "x86_64-netbsd" "x86_64-openbsd" "x86_64-solaris"
"x86_64-darwin" "i686-darwin" "aarch64-darwin" "armv7a-darwin"
"x86_64-windows" "i686-windows" "x86_64-windows" "i686-windows"
"wasm64-wasi" "wasm32-wasi"
"powerpc64le-linux"
"riscv32-linux" "riscv64-linux"
"aarch64-none" "avr-none" "arm-none" "i686-none" "x86_64-none" "powerpc-none" "msp430-none" "riscv64-none" "riscv32-none"
]; ];
allParsed = map parse.mkSystemFromString all; allParsed = map parse.mkSystemFromString all;
@ -34,6 +44,7 @@ in rec {
i686 = filterDoubles predicates.isi686; i686 = filterDoubles predicates.isi686;
x86_64 = filterDoubles predicates.isx86_64; x86_64 = filterDoubles predicates.isx86_64;
mips = filterDoubles predicates.isMips; mips = filterDoubles predicates.isMips;
riscv = filterDoubles predicates.isRiscV;
cygwin = filterDoubles predicates.isCygwin; cygwin = filterDoubles predicates.isCygwin;
darwin = filterDoubles predicates.isDarwin; darwin = filterDoubles predicates.isDarwin;
@ -45,7 +56,10 @@ in rec {
netbsd = filterDoubles predicates.isNetBSD; netbsd = filterDoubles predicates.isNetBSD;
openbsd = filterDoubles predicates.isOpenBSD; openbsd = filterDoubles predicates.isOpenBSD;
unix = filterDoubles predicates.isUnix; unix = filterDoubles predicates.isUnix;
wasi = filterDoubles predicates.isWasi;
windows = filterDoubles predicates.isWindows; windows = filterDoubles predicates.isWindows;
mesaPlatforms = ["i686-linux" "x86_64-linux" "x86_64-darwin" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux" "powerpc64le-linux"]; embedded = filterDoubles predicates.isNone;
mesaPlatforms = ["i686-linux" "x86_64-linux" "x86_64-darwin" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "armv7a-linux" "aarch64-linux" "powerpc64le-linux"];
} }
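Since lib.platforms is now an alias for lib.systems.doubles (see the lib/default.nix hunk earlier), the new groups can be used directly; an illustrative evaluation:

  let lib = import <nixpkgs/lib>; in
    # platform groups are plain lists of double strings (e.g. "riscv64-linux",
    # "wasm32-wasi"), suitable for use in meta.platforms
    lib.platforms.riscv ++ lib.platforms.wasi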

View file

@ -44,14 +44,6 @@ rec {
platform = platforms.aarch64-multiplatform; platform = platforms.aarch64-multiplatform;
}; };
armv5te-android-prebuilt = rec {
config = "armv5tel-unknown-linux-androideabi";
sdkVer = "21";
ndkVer = "18b";
platform = platforms.armv5te-android;
useAndroidPrebuilt = true;
};
armv7a-android-prebuilt = rec { armv7a-android-prebuilt = rec {
config = "armv7a-unknown-linux-androideabi"; config = "armv7a-unknown-linux-androideabi";
sdkVer = "24"; sdkVer = "24";
@ -96,12 +88,32 @@ rec {
config = "aarch64-unknown-linux-musl"; config = "aarch64-unknown-linux-musl";
}; };
gnu64 = { config = "x86_64-unknown-linux-gnu"; };
gnu32 = { config = "i686-unknown-linux-gnu"; };
musl64 = { config = "x86_64-unknown-linux-musl"; }; musl64 = { config = "x86_64-unknown-linux-musl"; };
musl32 = { config = "i686-unknown-linux-musl"; }; musl32 = { config = "i686-unknown-linux-musl"; };
riscv64 = riscv "64"; riscv64 = riscv "64";
riscv32 = riscv "32"; riscv32 = riscv "32";
riscv64-embedded = {
config = "riscv64-none-elf";
libc = "newlib";
platform = platforms.riscv-multiplatform "64";
};
riscv32-embedded = {
config = "riscv32-none-elf";
libc = "newlib";
platform = platforms.riscv-multiplatform "32";
};
msp430 = {
config = "msp430-elf";
libc = "newlib";
};
avr = { avr = {
config = "avr"; config = "avr";
}; };
@ -135,11 +147,6 @@ rec {
libc = "newlib"; libc = "newlib";
}; };
alpha-embedded = {
config = "alpha-elf";
libc = "newlib";
};
i686-embedded = { i686-embedded = {
config = "i686-elf"; config = "i686-elf";
libc = "newlib"; libc = "newlib";
@ -213,6 +220,22 @@ rec {
platform = {}; platform = {};
}; };
# BSDs
amd64-netbsd = {
config = "x86_64-unknown-netbsd";
libc = "nblibc";
};
#
# WASM
#
wasi32 = {
config = "wasm32-unknown-wasi";
useLLVM = true;
};
# Ghcjs # Ghcjs
ghcjs = { ghcjs = {
config = "js-unknown-ghcjs"; config = "js-unknown-ghcjs";

View file

@ -1,37 +0,0 @@
{ lib }:
let
inherit (lib.systems) parse;
inherit (lib.systems.inspect) patterns;
abis = lib.mapAttrs (_: abi: builtins.removeAttrs abi [ "assertions" ]) parse.abis;
in rec {
all = [ {} ]; # `{}` matches anything
none = [];
arm = [ patterns.isAarch32 ];
aarch64 = [ patterns.isAarch64 ];
x86 = [ patterns.isx86 ];
i686 = [ patterns.isi686 ];
x86_64 = [ patterns.isx86_64 ];
mips = [ patterns.isMips ];
riscv = [ patterns.isRiscV ];
cygwin = [ patterns.isCygwin ];
darwin = [ patterns.isDarwin ];
freebsd = [ patterns.isFreeBSD ];
# Should be better, but MinGW is unclear.
gnu = [
{ kernel = parse.kernels.linux; abi = abis.gnu; }
{ kernel = parse.kernels.linux; abi = abis.gnueabi; }
{ kernel = parse.kernels.linux; abi = abis.gnueabihf; }
];
illumos = [ patterns.isSunOS ];
linux = [ patterns.isLinux ];
netbsd = [ patterns.isNetBSD ];
openbsd = [ patterns.isOpenBSD ];
unix = patterns.isUnix; # Actually a list
windows = [ patterns.isWindows ];
inherit (lib.systems.doubles) mesaPlatforms;
}

View file

@ -20,7 +20,9 @@ rec {
isRiscV = { cpu = { family = "riscv"; }; }; isRiscV = { cpu = { family = "riscv"; }; };
isSparc = { cpu = { family = "sparc"; }; }; isSparc = { cpu = { family = "sparc"; }; };
isWasm = { cpu = { family = "wasm"; }; }; isWasm = { cpu = { family = "wasm"; }; };
isMsp430 = { cpu = { family = "msp430"; }; };
isAvr = { cpu = { family = "avr"; }; }; isAvr = { cpu = { family = "avr"; }; };
isAlpha = { cpu = { family = "alpha"; }; };
is32bit = { cpu = { bits = 32; }; }; is32bit = { cpu = { bits = 32; }; };
is64bit = { cpu = { bits = 64; }; }; is64bit = { cpu = { bits = 64; }; };
@ -41,6 +43,8 @@ rec {
isWindows = { kernel = kernels.windows; }; isWindows = { kernel = kernels.windows; };
isCygwin = { kernel = kernels.windows; abi = abis.cygnus; }; isCygwin = { kernel = kernels.windows; abi = abis.cygnus; };
isMinGW = { kernel = kernels.windows; abi = abis.gnu; }; isMinGW = { kernel = kernels.windows; abi = abis.gnu; };
isWasi = { kernel = kernels.wasi; };
isNone = { kernel = kernels.none; };
isAndroid = [ { abi = abis.android; } { abi = abis.androideabi; } ]; isAndroid = [ { abi = abis.android; } { abi = abis.androideabi; } ];
isMusl = with abis; map (a: { abi = a; }) [ musl musleabi musleabihf ]; isMusl = with abis; map (a: { abi = a; }) [ musl musleabi musleabihf ];
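Because elaborate merges these predicates into the final platform set (the mapAttrs … inspect.predicates line above), the new ones show up as booleans such as hostPlatform.isWasi; a quick sketch:

  let lib = import <nixpkgs/lib>; in {
    wasi  = (lib.systems.elaborate "wasm32-wasi").isWasi;    # => true
    bare  = (lib.systems.elaborate "riscv64-none").isNone;   # => true
    other = (lib.systems.elaborate "x86_64-linux").isWasi;   # => false
  }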

View file

@ -69,24 +69,24 @@ rec {
cpuTypes = with significantBytes; setTypes types.openCpuType { cpuTypes = with significantBytes; setTypes types.openCpuType {
arm = { bits = 32; significantByte = littleEndian; family = "arm"; }; arm = { bits = 32; significantByte = littleEndian; family = "arm"; };
armv5tel = { bits = 32; significantByte = littleEndian; family = "arm"; version = "5"; }; armv5tel = { bits = 32; significantByte = littleEndian; family = "arm"; version = "5"; arch = "armv5t"; };
armv6m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "6"; }; armv6m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "6"; arch = "armv6-m"; };
armv6l = { bits = 32; significantByte = littleEndian; family = "arm"; version = "6"; }; armv6l = { bits = 32; significantByte = littleEndian; family = "arm"; version = "6"; arch = "armv6"; };
armv7a = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; }; armv7a = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; arch = "armv7-a"; };
armv7r = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; }; armv7r = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; arch = "armv7-r"; };
armv7m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; }; armv7m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; arch = "armv7-m"; };
armv7l = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; }; armv7l = { bits = 32; significantByte = littleEndian; family = "arm"; version = "7"; arch = "armv7"; };
armv8a = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; }; armv8a = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; arch = "armv8-a"; };
armv8r = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; }; armv8r = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; arch = "armv8-a"; };
armv8m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; }; armv8m = { bits = 32; significantByte = littleEndian; family = "arm"; version = "8"; arch = "armv8-m"; };
aarch64 = { bits = 64; significantByte = littleEndian; family = "arm"; version = "8"; }; aarch64 = { bits = 64; significantByte = littleEndian; family = "arm"; version = "8"; arch = "armv8-a"; };
aarch64_be = { bits = 64; significantByte = bigEndian; family = "arm"; version = "8"; }; aarch64_be = { bits = 64; significantByte = bigEndian; family = "arm"; version = "8"; arch = "armv8-a"; };
i386 = { bits = 32; significantByte = littleEndian; family = "x86"; }; i386 = { bits = 32; significantByte = littleEndian; family = "x86"; arch = "i386"; };
i486 = { bits = 32; significantByte = littleEndian; family = "x86"; }; i486 = { bits = 32; significantByte = littleEndian; family = "x86"; arch = "i486"; };
i586 = { bits = 32; significantByte = littleEndian; family = "x86"; }; i586 = { bits = 32; significantByte = littleEndian; family = "x86"; arch = "i586"; };
i686 = { bits = 32; significantByte = littleEndian; family = "x86"; }; i686 = { bits = 32; significantByte = littleEndian; family = "x86"; arch = "i686"; };
x86_64 = { bits = 64; significantByte = littleEndian; family = "x86"; }; x86_64 = { bits = 64; significantByte = littleEndian; family = "x86"; arch = "x86-64"; };
mips = { bits = 32; significantByte = bigEndian; family = "mips"; }; mips = { bits = 32; significantByte = bigEndian; family = "mips"; };
mipsel = { bits = 32; significantByte = littleEndian; family = "mips"; }; mipsel = { bits = 32; significantByte = littleEndian; family = "mips"; };
@ -109,11 +109,92 @@ rec {
alpha = { bits = 64; significantByte = littleEndian; family = "alpha"; }; alpha = { bits = 64; significantByte = littleEndian; family = "alpha"; };
msp430 = { bits = 16; significantByte = littleEndian; family = "msp430"; };
avr = { bits = 8; family = "avr"; }; avr = { bits = 8; family = "avr"; };
js = { bits = 32; significantByte = littleEndian; family = "js"; }; js = { bits = 32; significantByte = littleEndian; family = "js"; };
}; };
# Determine whether two CPUs are compatible with each other. That is,
# can we run code built for system b on system a? For that to
# happen, the set of all possible programs that system
# b accepts must be a subset of the set of all programs that system
# a accepts. This compatibility relation forms a category where each
# CPU is an object and each arrow from a to b represents
# compatibility. CPUs with multiple modes of endianness are
# isomorphic while all CPUs are endomorphic because any program
# built for a CPU can run on that CPU.
isCompatible = a: b: with cpuTypes; lib.any lib.id [
# x86
(b == i386 && isCompatible a i486)
(b == i486 && isCompatible a i586)
(b == i586 && isCompatible a i686)
# XXX: Not true in some cases. Like in WSL mode.
(b == i686 && isCompatible a x86_64)
# ARMv4
(b == arm && isCompatible a armv5tel)
# ARMv5
(b == armv5tel && isCompatible a armv6l)
# ARMv6
(b == armv6l && isCompatible a armv6m)
(b == armv6m && isCompatible a armv7l)
# ARMv7
(b == armv7l && isCompatible a armv7a)
(b == armv7l && isCompatible a armv7r)
(b == armv7l && isCompatible a armv7m)
(b == armv7a && isCompatible a armv8a)
(b == armv7r && isCompatible a armv8a)
(b == armv7m && isCompatible a armv8a)
(b == armv7a && isCompatible a armv8r)
(b == armv7r && isCompatible a armv8r)
(b == armv7m && isCompatible a armv8r)
(b == armv7a && isCompatible a armv8m)
(b == armv7r && isCompatible a armv8m)
(b == armv7m && isCompatible a armv8m)
# ARMv8
(b == armv8r && isCompatible a armv8a)
(b == armv8m && isCompatible a armv8a)
# XXX: not always true! Some arm64 CPUs don't support arm32 mode.
(b == aarch64 && a == armv8a)
(b == armv8a && isCompatible a aarch64)
(b == aarch64 && a == aarch64_be)
(b == aarch64_be && isCompatible a aarch64)
# PowerPC
(b == powerpc && isCompatible a powerpc64)
(b == powerpcle && isCompatible a powerpc)
(b == powerpc && a == powerpcle)
(b == powerpc64le && isCompatible a powerpc64)
(b == powerpc64 && a == powerpc64le)
# MIPS
(b == mips && isCompatible a mips64)
(b == mips && a == mipsel)
(b == mipsel && isCompatible a mips)
(b == mips64 && a == mips64el)
(b == mips64el && isCompatible a mips64)
# RISCV
(b == riscv32 && isCompatible a riscv64)
# SPARC
(b == sparc && isCompatible a sparc64)
# WASM
(b == wasm32 && isCompatible a wasm64)
# identity
(b == a)
];
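A sketch of how this relation is reached from an elaborated platform (via the isCompatible attribute added in lib/systems/default.nix above); per the caveats in the comments, real hardware does not always honour the ARM rules:

  let
    lib     = import <nixpkgs/lib>;
    aarch64 = lib.systems.elaborate "aarch64-linux";
    armv7l  = lib.systems.elaborate "armv7l-linux";
  in {
    runs32on64 = aarch64.isCompatible armv7l;   # => true  (armv7l -> armv7a -> armv8a -> aarch64)
    runs64on32 = armv7l.isCompatible aarch64;   # => false (no rule lets aarch64 code run on armv7l)
  }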
################################################################################ ################################################################################
types.openVendor = mkOptionType { types.openVendor = mkOptionType {
@ -147,6 +228,7 @@ rec {
elf = {}; elf = {};
macho = {}; macho = {};
pe = {}; pe = {};
wasm = {};
unknown = {}; unknown = {};
}; };
@ -189,6 +271,7 @@ rec {
none = { execFormat = unknown; families = { }; }; none = { execFormat = unknown; families = { }; };
openbsd = { execFormat = elf; families = { inherit bsd; }; }; openbsd = { execFormat = elf; families = { inherit bsd; }; };
solaris = { execFormat = elf; families = { }; }; solaris = { execFormat = elf; families = { }; };
wasi = { execFormat = wasm; families = { }; };
windows = { execFormat = pe; families = { }; }; windows = { execFormat = pe; families = { }; };
ghcjs = { execFormat = unknown; families = { }; }; ghcjs = { execFormat = unknown; families = { }; };
} // { # aliases } // { # aliases
@ -298,6 +381,8 @@ rec {
then { cpu = elemAt l 0; kernel = elemAt l 1; abi = elemAt l 2; } then { cpu = elemAt l 0; kernel = elemAt l 1; abi = elemAt l 2; }
else if (elemAt l 2 == "mingw32") # autotools breaks on -gnu for window else if (elemAt l 2 == "mingw32") # autotools breaks on -gnu for window
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "windows"; } then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "windows"; }
else if (elemAt l 2 == "wasi")
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "wasi"; }
else if hasPrefix "netbsd" (elemAt l 2) else if hasPrefix "netbsd" (elemAt l 2)
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; } then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; }
else if (elem (elemAt l 2) ["eabi" "eabihf" "elf"]) else if (elem (elemAt l 2) ["eabi" "eabihf" "elf"])
@ -348,7 +433,7 @@ rec {
mkSystemFromString = s: mkSystemFromSkeleton (mkSkeletonFromList (lib.splitString "-" s)); mkSystemFromString = s: mkSystemFromSkeleton (mkSkeletonFromList (lib.splitString "-" s));
doubleFromSystem = { cpu, vendor, kernel, abi, ... }: doubleFromSystem = { cpu, kernel, abi, ... }:
/**/ if abi == abis.cygnus then "${cpu.name}-cygwin" /**/ if abi == abis.cygnus then "${cpu.name}-cygwin"
else if kernel.families ? darwin then "${cpu.name}-darwin" else if kernel.families ? darwin then "${cpu.name}-darwin"
else "${cpu.name}-${kernel.name}"; else "${cpu.name}-${kernel.name}";

View file

@ -253,22 +253,11 @@ rec {
kernelTarget = "zImage"; kernelTarget = "zImage";
}; };
# https://developer.android.com/ndk/guides/abis#armeabi
armv5te-android = {
name = "armeabi";
gcc = {
arch = "armv5te";
float = "soft";
float-abi = "soft";
};
};
# https://developer.android.com/ndk/guides/abis#v7a # https://developer.android.com/ndk/guides/abis#v7a
armv7a-android = { armv7a-android = {
name = "armeabi-v7a"; name = "armeabi-v7a";
gcc = { gcc = {
arch = "armv7-a"; arch = "armv7-a";
float = "hard";
float-abi = "softfp"; float-abi = "softfp";
fpu = "vfpv3-d16"; fpu = "vfpv3-d16";
}; };

View file

@ -71,6 +71,15 @@ checkConfigError 'The option value .* in .* is not of type.*positive integer.*'
checkConfigOutput "42" config.value ./declare-int-between-value.nix ./define-value-int-positive.nix checkConfigOutput "42" config.value ./declare-int-between-value.nix ./define-value-int-positive.nix
checkConfigError 'The option value .* in .* is not of type.*between.*-21 and 43.*inclusive.*' config.value ./declare-int-between-value.nix ./define-value-int-negative.nix checkConfigError 'The option value .* in .* is not of type.*between.*-21 and 43.*inclusive.*' config.value ./declare-int-between-value.nix ./define-value-int-negative.nix
# Check either types
# types.either
checkConfigOutput "42" config.value ./declare-either.nix ./define-value-int-positive.nix
checkConfigOutput "\"24\"" config.value ./declare-either.nix ./define-value-string.nix
# types.oneOf
checkConfigOutput "42" config.value ./declare-oneOf.nix ./define-value-int-positive.nix
checkConfigOutput "[ ]" config.value ./declare-oneOf.nix ./define-value-list.nix
checkConfigOutput "\"24\"" config.value ./declare-oneOf.nix ./define-value-string.nix
# Check mkForce without submodules. # Check mkForce without submodules.
set -- config.enable ./declare-enable.nix ./define-enable.nix set -- config.enable ./declare-enable.nix ./define-enable.nix
checkConfigOutput "true" "$@" checkConfigOutput "true" "$@"
@ -149,7 +158,7 @@ checkConfigOutput "1 2 3 4 5 6 7 8 9 10" config.result ./loaOf-with-long-list.ni
# Check loaOf with many merges of lists. # Check loaOf with many merges of lists.
checkConfigOutput "1 2 3 4 5 6 7 8 9 10" config.result ./loaOf-with-many-list-merges.nix checkConfigOutput "1 2 3 4 5 6 7 8 9 10" config.result ./loaOf-with-many-list-merges.nix
# Check mkAliasOptionModuleWithPriority. # Check mkAliasOptionModule.
checkConfigOutput "true" config.enable ./alias-with-priority.nix checkConfigOutput "true" config.enable ./alias-with-priority.nix
checkConfigOutput "true" config.enableAlias ./alias-with-priority.nix checkConfigOutput "true" config.enableAlias ./alias-with-priority.nix
checkConfigOutput "false" config.enable ./alias-with-priority-can-override.nix checkConfigOutput "false" config.enable ./alias-with-priority-can-override.nix

View file

@ -1,5 +1,8 @@
# This is a test to show that mkAliasOptionModule sets the priority correctly # This is a test to show that mkAliasOptionModule sets the priority correctly
# for aliased options. # for aliased options.
#
# This test shows that an alias with a high priority is able to override
# a non-aliased option.
{ config, lib, ... }: { config, lib, ... }:
@ -32,10 +35,10 @@ with lib;
imports = [ imports = [
# Create an alias for the "enable" option. # Create an alias for the "enable" option.
(mkAliasOptionModuleWithPriority [ "enableAlias" ] [ "enable" ]) (mkAliasOptionModule [ "enableAlias" ] [ "enable" ])
# Disable the aliased option, but with a default (low) priority so it # Disable the aliased option with a high priority so it
# should be able to be overridden by the next import. # should override the next import.
( { config, lib, ... }: ( { config, lib, ... }:
{ {
enableAlias = lib.mkForce false; enableAlias = lib.mkForce false;

View file

@ -1,5 +1,8 @@
# This is a test to show that mkAliasOptionModule sets the priority correctly # This is a test to show that mkAliasOptionModule sets the priority correctly
# for aliased options. # for aliased options.
#
# This test shows that an alias with a low priority is able to be overridden
# with a non-aliased option.
{ config, lib, ... }: { config, lib, ... }:
@ -32,7 +35,7 @@ with lib;
imports = [ imports = [
# Create an alias for the "enable" option. # Create an alias for the "enable" option.
(mkAliasOptionModuleWithPriority [ "enableAlias" ] [ "enable" ]) (mkAliasOptionModule [ "enableAlias" ] [ "enable" ])
# Disable the aliased option, but with a default (low) priority so it # Disable the aliased option, but with a default (low) priority so it
# should be able to be overridden by the next import. # should be able to be overridden by the next import.

View file

@ -0,0 +1,5 @@
{ lib, ... }: {
options.value = lib.mkOption {
type = lib.types.either lib.types.int lib.types.str;
};
}

View file

@ -0,0 +1,9 @@
{ lib, ... }: {
options.value = lib.mkOption {
type = lib.types.oneOf [
lib.types.int
(lib.types.listOf lib.types.int)
lib.types.str
];
};
}

View file

@ -1,11 +1,9 @@
{ pkgs ? import ((import ../.).cleanSource ../..) {} }: { pkgs ? import ((import ../.).cleanSource ../..) {} }:
pkgs.stdenv.mkDerivation { pkgs.runCommandNoCC "nixpkgs-lib-tests" {
name = "nixpkgs-lib-tests"; buildInputs = [ pkgs.nix (import ./check-eval.nix) ];
buildInputs = [ pkgs.nix ];
NIX_PATH="nixpkgs=${pkgs.path}"; NIX_PATH="nixpkgs=${pkgs.path}";
} ''
buildCommand = ''
datadir="${pkgs.nix}/share" datadir="${pkgs.nix}/share"
export TEST_ROOT=$(pwd)/test-tmp export TEST_ROOT=$(pwd)/test-tmp
export NIX_BUILD_HOOK= export NIX_BUILD_HOOK=
@ -22,10 +20,5 @@ pkgs.stdenv.mkDerivation {
cd ${pkgs.path}/lib/tests cd ${pkgs.path}/lib/tests
bash ./modules.sh bash ./modules.sh
[[ "$(nix-instantiate --eval --strict misc.nix)" == "[ ]" ]]
[[ "$(nix-instantiate --eval --strict systems.nix)" == "[ ]" ]]
touch $out touch $out
''; ''
}

View file

@ -12,19 +12,19 @@ let
expected = lib.sort lib.lessThan y; expected = lib.sort lib.lessThan y;
}; };
in with lib.systems.doubles; lib.runTests { in with lib.systems.doubles; lib.runTests {
testall = mseteq all (linux ++ darwin ++ freebsd ++ openbsd ++ netbsd ++ illumos ++ windows); testall = mseteq all (linux ++ darwin ++ freebsd ++ openbsd ++ netbsd ++ illumos ++ wasi ++ windows ++ embedded);
testarm = mseteq arm [ "armv5tel-linux" "armv6l-linux" "armv7l-linux" ]; testarm = mseteq arm [ "armv5tel-linux" "armv6l-linux" "armv7l-linux" "arm-none" "armv7a-darwin" ];
testi686 = mseteq i686 [ "i686-linux" "i686-freebsd" "i686-netbsd" "i686-openbsd" "i686-cygwin" "i686-windows" ]; testi686 = mseteq i686 [ "i686-linux" "i686-freebsd" "i686-netbsd" "i686-openbsd" "i686-cygwin" "i686-windows" "i686-none" "i686-darwin" ];
testmips = mseteq mips [ "mipsel-linux" ]; testmips = mseteq mips [ "mipsel-linux" ];
testx86_64 = mseteq x86_64 [ "x86_64-linux" "x86_64-darwin" "x86_64-freebsd" "x86_64-openbsd" "x86_64-netbsd" "x86_64-cygwin" "x86_64-solaris" "x86_64-windows" ]; testx86_64 = mseteq x86_64 [ "x86_64-linux" "x86_64-darwin" "x86_64-freebsd" "x86_64-openbsd" "x86_64-netbsd" "x86_64-cygwin" "x86_64-solaris" "x86_64-windows" "x86_64-none" ];
testcygwin = mseteq cygwin [ "i686-cygwin" "x86_64-cygwin" ]; testcygwin = mseteq cygwin [ "i686-cygwin" "x86_64-cygwin" ];
testdarwin = mseteq darwin [ "x86_64-darwin" ]; testdarwin = mseteq darwin [ "x86_64-darwin" "i686-darwin" "aarch64-darwin" "armv7a-darwin" ];
testfreebsd = mseteq freebsd [ "i686-freebsd" "x86_64-freebsd" ]; testfreebsd = mseteq freebsd [ "i686-freebsd" "x86_64-freebsd" ];
testgnu = mseteq gnu (linux /* ++ kfreebsd ++ ... */); testgnu = mseteq gnu (linux /* ++ kfreebsd ++ ... */);
testillumos = mseteq illumos [ "x86_64-solaris" ]; testillumos = mseteq illumos [ "x86_64-solaris" ];
testlinux = mseteq linux [ "i686-linux" "x86_64-linux" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "aarch64-linux" "mipsel-linux" ]; testlinux = mseteq linux [ "aarch64-linux" "armv5tel-linux" "armv6l-linux" "armv7l-linux" "i686-linux" "mipsel-linux" "riscv32-linux" "riscv64-linux" "x86_64-linux" "powerpc64le-linux" ];
testnetbsd = mseteq netbsd [ "i686-netbsd" "x86_64-netbsd" ]; testnetbsd = mseteq netbsd [ "i686-netbsd" "x86_64-netbsd" ];
testopenbsd = mseteq openbsd [ "i686-openbsd" "x86_64-openbsd" ]; testopenbsd = mseteq openbsd [ "i686-openbsd" "x86_64-openbsd" ];
testwindows = mseteq windows [ "i686-cygwin" "x86_64-cygwin" "i686-windows" "x86_64-windows" ]; testwindows = mseteq windows [ "i686-cygwin" "x86_64-cygwin" "i686-windows" "x86_64-windows" ];

View file

@ -112,7 +112,7 @@ rec {
# Function to call # Function to call
f: f:
# Argument to check for null before passing it to `f` # Argument to check for null before passing it to `f`
a: if isNull a then a else f a; a: if a == null then a else f a;
# Pull in some builtins not included elsewhere. # Pull in some builtins not included elsewhere.
inherit (builtins) inherit (builtins)
@ -134,7 +134,7 @@ rec {
On each release the first letter is bumped and a new animal is chosen On each release the first letter is bumped and a new animal is chosen
starting with that new letter. starting with that new letter.
*/ */
codeName = "Koi"; codeName = "Loris";
/* Returns the current nixpkgs version suffix as string. */ /* Returns the current nixpkgs version suffix as string. */
versionSuffix = versionSuffix =
@ -259,9 +259,10 @@ rec {
# TODO: figure out a clever way to integrate location information from # TODO: figure out a clever way to integrate location information from
# something like __unsafeGetAttrPos. # something like __unsafeGetAttrPos.
warn = msg: builtins.trace "WARNING: ${msg}"; warn = msg: builtins.trace "warning: ${msg}";
info = msg: builtins.trace "INFO: ${msg}"; info = msg: builtins.trace "INFO: ${msg}";
showWarnings = warnings: res: lib.fold (w: x: warn w x) res warnings;
## Function annotations ## Function annotations
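showWarnings simply folds `warn` over a list of messages before returning its second argument; a tiny sketch (the warning text is made up):

  let lib = import <nixpkgs/lib>; in
    # prints "trace: warning: this setting is deprecated" to stderr
    # (note the lower-cased prefix from the `warn` change above) and returns 42
    lib.showWarnings [ "this setting is deprecated" ] 42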

View file

@ -111,7 +111,7 @@ rec {
name = "int"; name = "int";
description = "signed integer"; description = "signed integer";
check = isInt; check = isInt;
merge = mergeOneOption; merge = mergeEqualOption;
}; };
# Specialized subdomains of int # Specialized subdomains of int
@ -176,14 +176,14 @@ rec {
name = "float"; name = "float";
description = "floating point number"; description = "floating point number";
check = isFloat; check = isFloat;
merge = mergeOneOption; merge = mergeEqualOption;
}; };
str = mkOptionType { str = mkOptionType {
name = "str"; name = "str";
description = "string"; description = "string";
check = isString; check = isString;
merge = mergeOneOption; merge = mergeEqualOption;
}; };
strMatching = pattern: mkOptionType { strMatching = pattern: mkOptionType {
@ -217,7 +217,8 @@ rec {
# Deprecated; should not be used because it quietly concatenates # Deprecated; should not be used because it quietly concatenates
# strings, which is usually not what you want. # strings, which is usually not what you want.
string = separatedString ""; string = warn "types.string is deprecated because it quietly concatenates strings"
(separatedString "");
attrs = mkOptionType { attrs = mkOptionType {
name = "attrs"; name = "attrs";
@ -243,7 +244,7 @@ rec {
name = "path"; name = "path";
# Hacky: there is no isPath primop. # Hacky: there is no isPath primop.
check = x: builtins.substring 0 1 (toString x) == "/"; check = x: builtins.substring 0 1 (toString x) == "/";
merge = mergeOneOption; merge = mergeEqualOption;
}; };
# drop this in the future: # drop this in the future:
@ -415,7 +416,7 @@ rec {
name = "enum"; name = "enum";
description = "one of ${concatMapStringsSep ", " show values}"; description = "one of ${concatMapStringsSep ", " show values}";
check = flip elem values; check = flip elem values;
merge = mergeOneOption; merge = mergeEqualOption;
functor = (defaultFunctor name) // { payload = values; binOp = a: b: unique (a ++ b); }; functor = (defaultFunctor name) // { payload = values; binOp = a: b: unique (a ++ b); };
}; };
@ -443,6 +444,13 @@ rec {
functor = (defaultFunctor name) // { wrapped = [ t1 t2 ]; }; functor = (defaultFunctor name) // { wrapped = [ t1 t2 ]; };
}; };
# Any of the types in the given list
oneOf = ts:
let
head' = if ts == [] then throw "types.oneOf needs to get at least one type in its argument" else head ts;
tail' = tail ts;
in foldl' either head' tail';
# Either value of type `finalType` or `coercedType`, the latter is # Either value of type `finalType` or `coercedType`, the latter is
# converted to `finalType` using `coerceFunc`. # converted to `finalType` using `coerceFunc`.
coercedTo = coercedType: coerceFunc: finalType: coercedTo = coercedType: coerceFunc: finalType:
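
The new types.oneOf folds types.either over its argument, so oneOf [ int str bool ] behaves like nested either types. A hedged usage sketch with a made-up option:

let lib = import <nixpkgs/lib>; in
(lib.evalModules {
  modules = [
    { options.listenPort = lib.mkOption {
        # accept either a numeric port or a named service string
        type = with lib.types; oneOf [ int str ];
      };
    }
    { listenPort = 8080; }   # "ssh" would also type-check; a boolean would be rejected
  ];
}).config.listenPort         # => 8080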
@ -469,8 +477,10 @@ rec {
# Obsolete alternative to configOf. It takes its option # Obsolete alternative to configOf. It takes its option
# declarations from the options attribute of containing option # declarations from the options attribute of containing option
# declaration. # declaration.
optionSet = builtins.throw "types.optionSet is deprecated; use types.submodule instead" "optionSet"; optionSet = mkOptionType {
name = builtins.trace "types.optionSet is deprecated; use types.submodule instead" "optionSet";
description = "option set";
};
# Augment the given type with an additional type check function. # Augment the given type with an additional type check function.
addCheck = elemType: check: elemType // { check = x: elemType.check x && check x; }; addCheck = elemType: check: elemType // { check = x: elemType.check x && check x; };

File diff suppressed because it is too large.

View file

@ -14,12 +14,13 @@ fi
tmp=$(mktemp -d) tmp=$(mktemp -d)
pushd $tmp >/dev/null pushd $tmp >/dev/null
wget -nH -r -c --no-parent "${WGET_ARGS[@]}" >/dev/null wget -nH -r -c --no-parent "${WGET_ARGS[@]}" -A '*.tar.xz.sha256' -A '*.mirrorlist' >/dev/null
find -type f -name '*.mirrorlist' -delete
csv=$(mktemp) csv=$(mktemp)
find . -type f | while read src; do find . -type f | while read src; do
# Sanitize file name # Sanitize file name
filename=$(basename "$src" | tr '@' '_') filename=$(gawk '{ print $2 }' "$src" | tr '@' '_')
nameVersion="${filename%.tar.*}" nameVersion="${filename%.tar.*}"
name=$(echo "$nameVersion" | sed -e 's,-[[:digit:]].*,,' | sed -e 's,-opensource-src$,,' | sed -e 's,-everywhere-src$,,') name=$(echo "$nameVersion" | sed -e 's,-[[:digit:]].*,,' | sed -e 's,-opensource-src$,,' | sed -e 's,-everywhere-src$,,')
version=$(echo "$nameVersion" | sed -e 's,^\([[:alpha:]][[:alnum:]]*-\)\+,,') version=$(echo "$nameVersion" | sed -e 's,^\([[:alpha:]][[:alnum:]]*-\)\+,,')
@ -38,8 +39,8 @@ gawk -F , "{ print \$1 }" $csv | sort | uniq | while read name; do
latestVersion=$(echo "$versions" | sort -rV | head -n 1) latestVersion=$(echo "$versions" | sort -rV | head -n 1)
src=$(gawk -F , "/^$name,$latestVersion,/ { print \$3 }" $csv) src=$(gawk -F , "/^$name,$latestVersion,/ { print \$3 }" $csv)
filename=$(gawk -F , "/^$name,$latestVersion,/ { print \$4 }" $csv) filename=$(gawk -F , "/^$name,$latestVersion,/ { print \$4 }" $csv)
url="${src:2}" url="$(dirname "${src:2}")/$filename"
sha256=$(nix-hash --type sha256 --base32 --flat "$src") sha256=$(gawk '{ print $1 }' "$src")
cat >>"$SRCS" <<EOF cat >>"$SRCS" <<EOF
$name = { $name = {
version = "$latestVersion"; version = "$latestVersion";

View file

@ -1,32 +1,68 @@
ansicolors, # nix name, luarocks name, server, version,luaversion,maintainers
argparse, alt-getopt,,,,,arobyn
basexx, ansicolors,,,,,
cqueues argparse,,,,,
dkjson basexx,,,,,
fifo binaryheap,,,,,vcunat
inspect bit32,,,,lua5_1,lblasc
lgi busted,,,,,
lpeg_patterns cjson,lua-cjson,,,,
lpty compat53,,,,,vcunat
lrexlib-gnu, coxpcall,,,1.17.0-1,,
lrexlib-posix, cqueues,,,,,vcunat
ltermbox, cyrussasl,,,,,vcunat
lua-cmsgpack, digestif,,http://luarocks.org/dev,,lua5_3,
lua_cliargs, dkjson,,,,,
lua-iconv, fifo,,,,,
lua-term, http,,,,,vcunat
luabitop, inspect,,,,,
luaevent, ldoc,,,,,
luacheck lgi,,,,,
luaffi,http://luarocks.org/dev, ljsyscall,,,,lua5_1,lblasc
luuid, lpeg,,,,,vyp
penlight, lpeg_patterns,,,,,
say, lpeglabel,,,,,
luv, lpty,,,,,
luasystem, lrexlib-gnu,,,,,
mediator_lua,http://luarocks.org/manifests/teto lrexlib-pcre,,,,,vyp
mpack,http://luarocks.org/manifests/teto lrexlib-posix,,,,,
nvim-client,http://luarocks.org/manifests/teto ltermbox,,,,,
busted,http://luarocks.org/manifests/teto lua-cmsgpack,,,,,
luassert,http://luarocks.org/manifests/teto lua-iconv,,,,,
coxpcall,https://luarocks.org/manifests/hisham,1.17.0-1 lua-lsp,,http://luarocks.org/dev,,,
lua-messagepack,,,,,
lua-term,,,,,
lua-toml,,,,,
lua-zlib,,,,,koral
lua_cliargs,,,,,
luabitop,,,,,
luacheck,,,,,
luadbi,,,,,
luadbi-mysql,,,,,
luadbi-postgresql,,,,,
luadbi-sqlite3,,,,,
luaevent,,,,,
luaexpat,,,1.3.0-1,,arobyn flosse
luaffi,,http://luarocks.org/dev,,,
luafilesystem,,,1.7.0-2,,flosse vcunat
luaossl,,,,lua5_1,vcunat
luaposix,,,,,vyp lblasc
luasec,,,,,flosse
luasocket,,,,,
luasql-sqlite3,,,,,vyp
luassert,,,,,
luasystem,,,,,
luazip,,,,,
luuid,,,,,
luv,,,,,
markdown,,,,,
mediator_lua,,,,,
mpack,,,,,
moonscript,,,,,arobyn
nvim-client,,,,,
penlight,,,,,
rapidjson,,,,,
say,,,,,
std__debug,std._debug,,,,
std_normalize,std.normalize,,,,
stdlib,,,,,vyp

View file

@ -5,7 +5,7 @@ stdenv.mkDerivation {
buildInputs = [ makeWrapper perl perlPackages.XMLSimple ]; buildInputs = [ makeWrapper perl perlPackages.XMLSimple ];
unpackPhase = "true"; dontUnpack = true;
buildPhase = "true"; buildPhase = "true";
installPhase = installPhase =

View file

@ -0,0 +1,36 @@
#!/usr/bin/env bash
# script to generate `pkgs/networking/instant-messengers/discord/default.nix`
set -e
exec >${1:?usage: $0 <output-file>}
cat <<EOF
{ branch ? "stable", pkgs }:
let
inherit (pkgs) callPackage fetchurl;
in {
EOF
for branch in "" ptb canary; do
url=$(curl -sI "https://discordapp.com/api/download${branch:+/}${branch}?platform=linux&format=tar.gz" | grep -oP 'location: \K\S+')
version=${url##https://dl*.discordapp.net/apps/linux/}
version=${version%%/*.tar.gz}
echo " ${branch:-stable} = callPackage ./base.nix {"
echo " pname = \"discord${branch:+-}${branch}\";"
case $branch in
"") suffix="" ;;
ptb) suffix="PTB" ;;
canary) suffix="Canary" ;;
esac
echo " binaryName = \"Discord${suffix}\";"
echo " desktopName = \"Discord${suffix:+ }${suffix}\";"
echo " version = \"${version}\";"
echo " src = fetchurl {"
echo " url = \"${url}\";"
echo " sha256 = \"$(nix-prefetch-url "$url")\";"
echo " };"
echo " };"
done
echo "}.\${branch}"

View file

@ -1,5 +1,5 @@
#!/usr/bin/env nix-shell #!/usr/bin/env nix-shell
#!nix-shell -p nix-prefetch-scripts luarocks-nix -i bash #!nix-shell update-luarocks-shell.nix -i bash
# You'll likely want to use # You'll likely want to use
# `` # ``
@ -8,48 +8,52 @@
# to update all libraries in that folder. # to update all libraries in that folder.
# to debug, redirect stderr to stdout with 2>&1 # to debug, redirect stderr to stdout with 2>&1
# stop the script upon C-C # stop the script upon C-C
set -eu -o pipefail set -eu -o pipefail
if [ $# -lt 1 ]; then
print_help
exit 1
fi
CSV_FILE="maintainers/scripts/luarocks-packages.csv" CSV_FILE="maintainers/scripts/luarocks-packages.csv"
TMP_FILE="$(mktemp)" TMP_FILE="$(mktemp)"
# Set in the update-luarocks-shell.nix
NIXPKGS_PATH="$LUAROCKS_NIXPKGS_PATH"
exit_trap() # 10 is a pretty arbitrary number of simultaneous jobs, but it is generally
{ # impolite to hit a webserver with *too* many simultaneous connections :)
local lc="$BASH_COMMAND" rc=$? PARALLEL_JOBS=10
test $rc -eq 0 || echo -e "*** error $rc: $lc.\nGenerated temporary file in $TMP_FILE" >&2
exit_trap() {
local lc="$BASH_COMMAND" rc=$?
test $rc -eq 0 || echo -e "*** error $rc: $lc.\nGenerated temporary file in $TMP_FILE" >&2
} }
trap exit_trap EXIT
print_help() { print_help() {
echo "Usage: $0 <GENERATED_FILE>" echo "Usage: $0 <GENERATED_FILE>"
echo "(most likely pkgs/development/lua-modules/generated-packages.nix)" echo "(most likely pkgs/development/lua-modules/generated-packages.nix)"
echo "" echo ""
echo " -c <CSV_FILE> to set the list of luarocks package to generate" echo " -c <CSV_FILE> to set the list of luarocks package to generate"
exit 1 exit 1
} }
if [ $# -lt 1 ]; then
print_help
exit 1
fi
trap exit_trap EXIT
while getopts ":hc:" opt; do while getopts ":hc:" opt; do
case $opt in case $opt in
h) h)
print_help print_help
;; ;;
c) c)
echo "Loading package list from $OPTARG !" >&2 echo "Loading package list from $OPTARG !" >&2
CSV_FILE="$OPTARG" CSV_FILE="$OPTARG"
;; ;;
\?) \?)
echo "Invalid option: -$OPTARG" >&2 echo "Invalid option: -$OPTARG" >&2
;; ;;
esac esac
shift $((OPTIND-1)) shift $((OPTIND - 1))
done done
GENERATED_NIXFILE="$1" GENERATED_NIXFILE="$1"
@ -61,7 +65,7 @@ nixpkgs$ ${0} ${GENERATED_NIXFILE}
These packages are manually refined in lua-overrides.nix These packages are manually refined in lua-overrides.nix
*/ */
{ self, lua, stdenv, fetchurl, fetchgit, pkgs, ... } @ args: { self, stdenv, fetchurl, fetchgit, pkgs, ... } @ args:
self: super: self: super:
with self; with self;
{ {
@ -72,41 +76,60 @@ FOOTER="
/* GENERATED */ /* GENERATED */
" "
function convert_pkg() {
nix_pkg_name="$1"
lua_pkg_name="$2"
server="$3"
pkg_version="$4"
lua_version="$5"
maintainers="$6"
function convert_pkg () { if [ "${nix_pkg_name:0:1}" == "#" ]; then
pkg="$1" echo "Skipping comment ${*}" >&2
server="" return
if [ ! -z "$2" ]; then fi
server=" --server=$2" if [ -z "$lua_pkg_name" ]; then
fi echo "Using nix_name as lua_pkg_name for '$nix_pkg_name'" >&2
lua_pkg_name="$nix_pkg_name"
fi
version="${3:-}" echo "Building expression for $lua_pkg_name (version $pkg_version) from server [$server]" >&2
luarocks_args=(nix)
echo "looking at $pkg (version $version) from server [$server]" >&2 if [[ -n $server ]]; then
cmd="luarocks nix $server $pkg $version" luarocks_args+=("--only-server=$server")
drv="$($cmd)" fi
if [ $? -ne 0 ]; then if [[ -n $maintainers ]]; then
echo "Failed to convert $pkg" >&2 luarocks_args+=("--maintainers=$maintainers")
echo "$drv" >&2 fi
if [[ -n $lua_version ]]; then
lua_drv_path=$(nix-build --no-out-link "$NIXPKGS_PATH" -A "$lua_version")
luarocks_args+=("--lua-dir=$lua_drv_path/bin")
fi
luarocks_args+=("$lua_pkg_name")
if [[ -n $pkg_version ]]; then
luarocks_args+=("$pkg_version")
fi
echo "Running 'luarocks ${luarocks_args[*]}'" >&2
if drv="$nix_pkg_name = $(luarocks "${luarocks_args[@]}")"; then
echo "$drv"
else else
echo "$drv" | tee -a "$TMP_FILE" echo "Failed to convert $nix_pkg_name" >&2
return 1
fi fi
} }
# params needed when called via callPackage # params needed when called via callPackage
echo "$HEADER" | tee "$TMP_FILE" echo "$HEADER" | tee "$TMP_FILE"
# list of packages with format # Ensure parallel can run our bash function
# name,server,version export -f convert_pkg
while IFS=, read -r pkg_name server version export SHELL=bash
do # Read each line in the csv file and run convert_pkg for each, in parallel
if [ -z "$pkg_name" ]; then parallel --group --keep-order --halt now,fail=1 --jobs "$PARALLEL_JOBS" --colsep ',' convert_pkg {} <"$CSV_FILE" | tee -a "$TMP_FILE"
echo "Skipping empty package name" >&2
fi
convert_pkg "$pkg_name" "$server" "$version"
done < "$CSV_FILE"
# close the set # close the set
echo "$FOOTER" | tee -a "$TMP_FILE" echo "$FOOTER" | tee -a "$TMP_FILE"
cp "$TMP_FILE" "$GENERATED_NIXFILE" cp "$TMP_FILE" "$GENERATED_NIXFILE"
# vim: set ts=4 sw=4 ft=sh:

View file

@ -0,0 +1,9 @@
{ nixpkgs ? import ../.. { }
}:
with nixpkgs;
mkShell {
buildInputs = [
bash luarocks-nix nix-prefetch-scripts parallel
];
LUAROCKS_NIXPKGS_PATH = toString nixpkgs.path;
}

View file

@ -20,7 +20,9 @@ let
in in
[x] ++ nubOn f xs; [x] ++ nubOn f xs;
pkgs = import ./../../default.nix { }; pkgs = import ./../../default.nix {
overlays = [];
};
packagesWith = cond: return: set: packagesWith = cond: return: set:
nubOn (pkg: pkg.updateScript) nubOn (pkg: pkg.updateScript)
@ -67,9 +69,12 @@ let
let let
attrSet = pkgs.lib.attrByPath (pkgs.lib.splitString "." path) null pkgs; attrSet = pkgs.lib.attrByPath (pkgs.lib.splitString "." path) null pkgs;
in in
packagesWith (name: pkg: builtins.hasAttr "updateScript" pkg) if attrSet == null then
(name: pkg: pkg) builtins.throw "Attribute path `${path}` does not exist."
attrSet; else
packagesWith (name: pkg: builtins.hasAttr "updateScript" pkg)
(name: pkg: pkg)
attrSet;
packageByName = name: packageByName = name:
let let
@ -122,9 +127,17 @@ let
packageData = package: { packageData = package: {
name = package.name; name = package.name;
pname = (builtins.parseDrvName package.name).name; pname = (builtins.parseDrvName package.name).name;
updateScript = pkgs.lib.toList package.updateScript; updateScript = map builtins.toString (pkgs.lib.toList package.updateScript);
}; };
packagesJson = pkgs.writeText "packages.json" (builtins.toJSON (map packageData packages));
optionalArgs =
pkgs.lib.optional (max-workers != null) "--max-workers=${max-workers}"
++ pkgs.lib.optional (keep-going == "true") "--keep-going";
args = [ packagesJson ] ++ optionalArgs;
in pkgs.stdenv.mkDerivation { in pkgs.stdenv.mkDerivation {
name = "nixpkgs-update-script"; name = "nixpkgs-update-script";
buildCommand = '' buildCommand = ''
@ -139,6 +152,6 @@ in pkgs.stdenv.mkDerivation {
''; '';
shellHook = '' shellHook = ''
unset shellHook # do not contaminate nested shells unset shellHook # do not contaminate nested shells
exec ${pkgs.python3.interpreter} ${./update.py} ${pkgs.writeText "packages.json" (builtins.toJSON (map packageData packages))}${pkgs.lib.optionalString (max-workers != null) " --max-workers=${max-workers}"}${pkgs.lib.optionalString (keep-going == "true") " --keep-going"} exec ${pkgs.python3.interpreter} ${./update.py} ${builtins.concatStringsSep " " args}
''; '';
} }
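
The inline string interpolation in the shellHook is replaced by an explicit args list built with lib.optional and joined with concatStringsSep. A hedged sketch of how that list assembles, with made-up values standing in for the script's packagesJson, max-workers and keep-going:

let
  lib = import <nixpkgs/lib>;
  max-workers = "4";
  keep-going = "true";
  args = [ "packages.json" ]   # stands in for the generated packages.json store path
    ++ lib.optional (max-workers != null) "--max-workers=${max-workers}"
    ++ lib.optional (keep-going == "true") "--keep-going";
in
  builtins.concatStringsSep " " args   # => "packages.json --max-workers=4 --keep-going"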

View file

@ -6,13 +6,14 @@ debug: generated manual-combined.xml
manual-combined.xml: generated *.xml **/*.xml manual-combined.xml: generated *.xml **/*.xml
rm -f ./manual-combined.xml rm -f ./manual-combined.xml
nix-shell --packages xmloscopy \ nix-shell --pure -Q --packages xmloscopy \
--run "xmloscopy --docbook5 ./manual.xml ./manual-combined.xml" --run "xmloscopy --docbook5 ./manual.xml ./manual-combined.xml"
.PHONY: format .PHONY: format
format: format:
find ../../ -iname '*.xml' -type f -print0 | xargs -0 -I{} -n1 \ nix-shell --pure -Q --packages xmlformat \
xmlformat --config-file "../xmlformat.conf" -i {} --run "find ../../ -iname '*.xml' -type f -print0 | xargs -0 -I{} -n1 \
xmlformat --config-file '../xmlformat.conf' -i {}"
.PHONY: fix-misc-xml .PHONY: fix-misc-xml
fix-misc-xml: fix-misc-xml:

View file

@ -11,12 +11,12 @@
Nix’s <emphasis>garbage collector</emphasis> to remove old, unreferenced Nix’s <emphasis>garbage collector</emphasis> to remove old, unreferenced
packages. This is easy: packages. This is easy:
<screen> <screen>
$ nix-collect-garbage <prompt>$ </prompt>nix-collect-garbage
</screen> </screen>
Alternatively, you can use a systemd unit that does the same in the Alternatively, you can use a systemd unit that does the same in the
background: background:
<screen> <screen>
# systemctl start nix-gc.service <prompt># </prompt>systemctl start nix-gc.service
</screen> </screen>
You can tell NixOS in <filename>configuration.nix</filename> to run this unit You can tell NixOS in <filename>configuration.nix</filename> to run this unit
automatically at certain points in time, for instance, every night at 03:15: automatically at certain points in time, for instance, every night at 03:15:
@ -31,11 +31,11 @@ $ nix-collect-garbage
configurations. The following command deletes old roots, removing the ability configurations. The following command deletes old roots, removing the ability
to roll back to them: to roll back to them:
<screen> <screen>
$ nix-collect-garbage -d <prompt>$ </prompt>nix-collect-garbage -d
</screen> </screen>
You can also do this for specific profiles, e.g. You can also do this for specific profiles, e.g.
<screen> <screen>
$ nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old <prompt>$ </prompt>nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
</screen> </screen>
Note that NixOS system configurations are stored in the profile Note that NixOS system configurations are stored in the profile
<filename>/nix/var/nix/profiles/system</filename>. <filename>/nix/var/nix/profiles/system</filename>.
@ -45,7 +45,7 @@ $ nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations o
Nix store) is to run Nix’s store optimiser, which seeks out identical files Nix store) is to run Nix’s store optimiser, which seeks out identical files
in the store and replaces them with hard links to a single copy. in the store and replaces them with hard links to a single copy.
<screen> <screen>
$ nix-store --optimise <prompt>$ </prompt>nix-store --optimise
</screen> </screen>
Since this command needs to read the entire Nix store, it can take quite a Since this command needs to read the entire Nix store, it can take quite a
while to finish. while to finish.

View file

@ -11,10 +11,10 @@
<literal>10.233.0.0/16</literal>. You can get the container’s IPv4 address <literal>10.233.0.0/16</literal>. You can get the container’s IPv4 address
as follows: as follows:
<screen> <screen>
# nixos-container show-ip foo <prompt># </prompt>nixos-container show-ip foo
10.233.4.2 10.233.4.2
$ ping -c1 10.233.4.2 <prompt>$ </prompt>ping -c1 10.233.4.2
64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms 64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms
</screen> </screen>
</para> </para>

View file

@ -16,7 +16,7 @@
<literal>systemd</literal> hierarchy, which is what systemd uses to keep <literal>systemd</literal> hierarchy, which is what systemd uses to keep
track of the processes belonging to each service or user session: track of the processes belonging to each service or user session:
<screen> <screen>
$ systemd-cgls <prompt>$ </prompt>systemd-cgls
├─user ├─user
│ └─eelco │ └─eelco
│ └─c1 │ └─c1

View file

@ -29,6 +29,13 @@
<xref linkend="opt-services.openssh.enable"/> = true; <xref linkend="opt-services.openssh.enable"/> = true;
<link linkend="opt-users.users._name__.openssh.authorizedKeys.keys">users.users.root.openssh.authorizedKeys.keys</link> = ["ssh-dss AAAAB3N…"]; <link linkend="opt-users.users._name__.openssh.authorizedKeys.keys">users.users.root.openssh.authorizedKeys.keys</link> = ["ssh-dss AAAAB3N…"];
' '
</screen>
By default the next free address in the <literal>10.233.0.0/16</literal> subnet will be chosen
as container IP. This behavior can be altered by setting <literal>--host-address</literal> and
<literal>--local-address</literal>:
<screen>
# nixos-container create test --config-file test-container.nix \
--local-address 10.235.1.2 --host-address 10.235.1.1
</screen> </screen>
</para> </para>

View file

@ -11,14 +11,14 @@
The command <literal>journalctl</literal> allows you to see the contents of The command <literal>journalctl</literal> allows you to see the contents of
the journal. For example, the journal. For example,
<screen> <screen>
$ journalctl -b <prompt>$ </prompt>journalctl -b
</screen> </screen>
shows all journal entries since the last reboot. (The output of shows all journal entries since the last reboot. (The output of
<command>journalctl</command> is piped into <command>less</command> by <command>journalctl</command> is piped into <command>less</command> by
default.) You can use various options and match operators to restrict output default.) You can use various options and match operators to restrict output
to messages of interest. For instance, to get all messages from PostgreSQL: to messages of interest. For instance, to get all messages from PostgreSQL:
<screen> <screen>
$ journalctl -u postgresql.service <prompt>$ </prompt>journalctl -u postgresql.service
-- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. -- -- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. --
... ...
Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG: database system is shut down Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG: database system is shut down
@ -29,7 +29,7 @@ Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG: database system is ready to
Or to get all messages since the last reboot that have at least a Or to get all messages since the last reboot that have at least a
“critical” severity level: “critical” severity level:
<screen> <screen>
$ journalctl -b -p crit <prompt>$ </prompt>journalctl -b -p crit
Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice] Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice]
Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1) Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1)
</screen> </screen>

View file

@ -33,7 +33,7 @@
where <replaceable>N</replaceable> is the number of the NixOS system where <replaceable>N</replaceable> is the number of the NixOS system
configuration. To get a list of the available configurations, do: configuration. To get a list of the available configurations, do:
<screen> <screen>
$ ls -l /nix/var/nix/profiles/system-*-link <prompt>$ </prompt>ls -l /nix/var/nix/profiles/system-*-link
<replaceable>...</replaceable> <replaceable>...</replaceable>
lrwxrwxrwx 1 root root 78 Aug 12 13:54 /nix/var/nix/profiles/system-268-link -> /nix/store/202b...-nixos-13.07pre4932_5a676e4-4be1055 lrwxrwxrwx 1 root root 78 Aug 12 13:54 /nix/var/nix/profiles/system-268-link -> /nix/store/202b...-nixos-13.07pre4932_5a676e4-4be1055
</screen> </screen>

View file

@ -4,7 +4,7 @@
version="5.0" version="5.0"
xml:id="ch-running"> xml:id="ch-running">
<title>Administration</title> <title>Administration</title>
<partintro> <partintro xml:id="ch-running-intro">
<para> <para>
This chapter describes various aspects of managing a running NixOS system, This chapter describes various aspects of managing a running NixOS system,
such as how to use the <command>systemd</command> service manager. such as how to use the <command>systemd</command> service manager.

View file

@ -21,7 +21,7 @@
<command>systemd</command>. Without any arguments, it shows the status of <command>systemd</command>. Without any arguments, it shows the status of
active units: active units:
<screen> <screen>
$ systemctl <prompt>$ </prompt>systemctl
-.mount loaded active mounted / -.mount loaded active mounted /
swapfile.swap loaded active active /swapfile swapfile.swap loaded active active /swapfile
sshd.service loaded active running SSH Daemon sshd.service loaded active running SSH Daemon
@ -33,7 +33,7 @@ graphical.target loaded active active Graphical Interface
You can ask for detailed status information about a unit, for instance, the You can ask for detailed status information about a unit, for instance, the
PostgreSQL database service: PostgreSQL database service:
<screen> <screen>
$ systemctl status postgresql.service <prompt>$ </prompt>systemctl status postgresql.service
postgresql.service - PostgreSQL Server postgresql.service - PostgreSQL Server
Loaded: loaded (/nix/store/pn3q73mvh75gsrl8w7fdlfk3fq5qm5mw-unit/postgresql.service) Loaded: loaded (/nix/store/pn3q73mvh75gsrl8w7fdlfk3fq5qm5mw-unit/postgresql.service)
Active: active (running) since Mon, 2013-01-07 15:55:57 CET; 9h ago Active: active (running) since Mon, 2013-01-07 15:55:57 CET; 9h ago

View file

@ -18,7 +18,7 @@
If the corruption is in a path in the closure of the NixOS system If the corruption is in a path in the closure of the NixOS system
configuration, you can fix it by doing configuration, you can fix it by doing
<screen> <screen>
# nixos-rebuild switch --repair <prompt># </prompt>nixos-rebuild switch --repair
</screen> </screen>
This will cause Nix to check every path in the closure, and if its This will cause Nix to check every path in the closure, and if its
cryptographic hash differs from the hash recorded in Nix’s database, the cryptographic hash differs from the hash recorded in Nix’s database, the
@ -28,7 +28,7 @@
<para> <para>
You can also scan the entire Nix store for corrupt paths: You can also scan the entire Nix store for corrupt paths:
<screen> <screen>
# nix-store --verify --check-contents --repair <prompt># </prompt>nix-store --verify --check-contents --repair
</screen> </screen>
Any corrupt paths will be redownloaded if they’re available in a binary Any corrupt paths will be redownloaded if they’re available in a binary
cache; otherwise, they cannot be repaired. cache; otherwise, they cannot be repaired.

View file

@ -10,7 +10,7 @@
allows querying and manipulating user sessions. For instance, to list all allows querying and manipulating user sessions. For instance, to list all
user sessions: user sessions:
<screen> <screen>
$ loginctl <prompt>$ </prompt>loginctl
SESSION UID USER SEAT SESSION UID USER SEAT
c1 500 eelco seat0 c1 500 eelco seat0
c3 0 root seat0 c3 0 root seat0
@ -21,7 +21,7 @@ $ loginctl
devices attached to the system; usually, there is only one seat.) To get devices attached to the system; usually, there is only one seat.) To get
information about a session: information about a session:
<screen> <screen>
$ loginctl session-status c3 <prompt>$ </prompt>loginctl session-status c3
c3 - root (0) c3 - root (0)
Since: Tue, 2013-01-08 01:17:56 CET; 4min 42s ago Since: Tue, 2013-01-08 01:17:56 CET; 4min 42s ago
Leader: 2536 (login) Leader: 2536 (login)

Some files were not shown because too many files have changed in this diff.