Commit graph

250206 commits

Doron Behar b5c6505e63 treewide: Safer libsForQt5.callPackage
Intro:

Part of #101369: Every attribute from kdeApplications and kdeFrameworks
can be built with a few different qt5 versions. It's hard to tell the
difference between an application and a library, and some applications
rely on inputs from kdeApplications and libsForQt5 alike.

Before this change, some applications that were defined with
`libsForQt5.callPackage` used libraries from the kde* sets that were
compiled with a specific qt5 version.

Because `inherit (kde*) <lib or app>;` was used in the widest scope, we
had issues with packages that depended on packages defined by this
`inherit`. This led to mismatched qt versions being used within the same
inputs, or in the inputs of inputs, etc.

Hence, we added to all libsForQt5* sets the packages that should be used
from the corresponding libsForQt5 set, in accordance with the
`libsForQt5*.callPackage` being used. All `inherit (kdeApplications)
<pkgs>` and similar inheritance was moved out of all-packages.nix and
into aliases.nix, kept only for backwards compatibility.

Now some KDE applications show up in the `libsForQt5*` attribute sets
where they didn't appear previously. This is somewhat misleading, as
they are not necessarily libraries, but they end up in the wider scope
because of their presence in aliases.nix. Still, it's the best that can
be done considering the circumstances and the urgency of the issue.
2020-10-30 20:32:19 +02:00
Matthew Bauer 989b403c7f
Merge pull request #96318 from matthewbauer/provide-patchelf-in-native-stdenv
stdenv/native: provide patchelf on linux
2020-10-30 13:32:13 -05:00
Doron Behar 2df527228f all-packages: Put all libsForQt5 near each other 2020-10-30 20:30:34 +02:00
Maximilian Bosch 460a30c15b
matrix-synapse: 1.22.0 -> 1.22.1
https://github.com/matrix-org/synapse/releases/tag/v1.22.1
2020-10-30 19:26:47 +01:00
ajs124 0126c86672 linuxstopmotion: 0.8.0 -> 0.8.5
and qt4 -> qt5
2020-10-30 19:26:36 +01:00
Doron Behar 62318eb816 kf5gpgmepp: Define in libsForQt5
Since it's a qt library and we can't guarantee that every package using
it as a dependency will need it compiled with the same qt version, we
put it in all `libsForQt5*` scopes (#101369).
2020-10-30 20:26:30 +02:00
Doron Behar cb29abf3a9 all-packages.nix: Inherit kdesu in a different scope
Part of #101369: To avoid packages using the default `kdesu`, which is
always built with qt515, we put it in scope only for packages defined
with a `libsForQt5` set, so that incompatible qt versions are not used
together in a package's inputs.
2020-10-30 20:16:37 +02:00
Doron Behar d44460b52e kpmcore: Use kio directly - from the scope of libsForQt5 used
Make sure only compatible qt versions are used for it (#101369):
kdeFrameworks always uses `libsForQt5`, so e.g. `libsForQt514.kpmcore`
would otherwise have used a kio compiled with qt515.
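
As a rough way to spot this kind of mismatch, the runtime closure of a
package can be inspected for the Qt versions it pulls in. A sketch, not
part of the original change, assuming a nixpkgs checkout on NIX_PATH and
that the qtbase store paths carry their version in the name:

```
# Build the package and list every store path in its runtime closure,
# then keep only the qtbase entries to see which Qt 5 versions appear.
out=$(nix-build '<nixpkgs>' -A libsForQt514.kpmcore --no-out-link)
nix-store -qR "$out" | grep -i qtbase

# A single qtbase version is the healthy outcome; seeing both 5.14 and
# 5.15 at once is the incompatibility described above.
```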
2020-10-30 20:13:49 +02:00
Doron Behar c172f144f4 kdeApplications: Use latest qt515 by default
Fixes #100707 - Since KDE's maintainers don't allow non-official kde
applications under the `kdeApplications` path, many packages that depend
on KDE libraries and applications are defined in all-packages.nix.
Overriding all kdeApplications to use qt514 while all other
`libsForQt5.callPackage` calls in all-packages.nix use qt515 causes
collisions.
2020-10-30 20:13:49 +02:00
Maximilian Bosch 3a12f8588a
Merge pull request #102160 from WilliButz/update/grafana/v7.3.1
grafana: 7.3.0 -> 7.3.1
2020-10-30 19:05:04 +01:00
pablo1107 c268484e05 ledger2beancount: init at 2.1 2020-10-30 14:58:29 -03:00
Graham Christensen c851030763
amazon-image: random.trust_cpu=on to cut 10s from boot
Ubuntu and other distros already have this set via kernel config.
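
For context, a booted system can be checked for this behaviour either
via the kernel command line or the kernel build configuration. A sketch;
`/proc/config.gz` only exists when the kernel was built with IKCONFIG
proc support:

```
# Was the parameter passed on the kernel command line?
grep -o 'random.trust_cpu=on' /proc/cmdline || echo "not set on the cmdline"

# Or is it baked into the kernel config, as Ubuntu and others do?
zgrep CONFIG_RANDOM_TRUST_CPU /proc/config.gz 2>/dev/null || echo "no /proc/config.gz"
```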
2020-10-30 13:45:19 -04:00
Maximilian Bosch 6928309c51
citrix_workspace: add 20.10.0
ChangeLog: https://docs.citrix.com/en-us/citrix-workspace-app-for-linux/whats-new.html#whats-new-in-2010
2020-10-30 18:36:21 +01:00
Maximilian Bosch 93a00bec3e
citrix_workspace: remove attributes for old versions; fix i686 build 2020-10-30 18:36:21 +01:00
Maximilian Bosch 2a9e33374b
up: 0.3.2 -> 0.4.0
https://github.com/akavel/up/releases/tag/v0.4
2020-10-30 18:36:21 +01:00
Matthew Kenigsberg b5faf8e4c6 amazon-glacier-cmd-interface: remove package
Last real upstream activity appears to be ~6 years ago, and the package
depends on python2 packages that will soon be vulnerable.

Helps #101964
2020-10-30 11:44:57 -05:00
StigP b52da4e641
Merge pull request #102071 from zakame/contrib/perl-NetAsyncWebSocket
perlPackages.NetAsyncWebSocket: init at 0.13
2020-10-30 16:43:05 +00:00
StigP 685c059737
Merge pull request #102066 from zakame/contrib/perl-NetAsyncHTTP
perlPackages.NetAsyncHTTP: init at 0.47
2020-10-30 16:40:22 +00:00
Graham Christensen 74a577b293
create-amis: improve wording around the service name's IAM role
Co-authored-by: Cole Helbling <cole.e.helbling@outlook.com>
2020-10-30 12:40:17 -04:00
Jonathan Ringer f24fa1e6f5 linuxPackages.usbip: disable < 4.10 2020-10-30 09:40:12 -07:00
Jonathan Ringer e6db435973 linux: add flavor metadata 2020-10-30 09:40:12 -07:00
Dmitry Kalinkin f89356b202
Merge pull request #102169 from cheriimoya/motion
motion 4.3.1 -> 4.3.2
2020-10-30 12:33:30 -04:00
Matthew Kenigsberg 3eb034db8c broadlink-cli: 0.9.0 -> 0.15.0 and use python3
Helps #101964
2020-10-30 09:18:34 -07:00
Graham Christensen ece5c0f304
stage-1: modprobe ext{2,3,4} before resizing
I noticed that booting a system with an ext4 root which expanded to 5T
took quite a long time (12 minutes in some cases, 43(!) in others).

I changed stage-1 to run `resize2fs -d 62` for extra debug output and
timing information. It revealed the adjust_superblock step taking
almost all of the time:

    [Fri Oct 30 11:10:15 UTC 2020] zero_high_bits_in_metadata: Memory used: 132k/0k (63k/70k), time:  0.00/ 0.00/ 0.00
    [Fri Oct 30 11:21:09 UTC 2020] adjust_superblock: Memory used: 396k/4556k (295k/102k), time: 654.21/ 0.59/ 5.13

But when I ran resize2fs on a disk with the identical content growing
to the identical target size, it would only take about 30 seconds. I
looked at what happened between those two steps in the fast case with
strace and found:

```
   235	getrusage(RUSAGE_SELF, {ru_utime={tv_sec=0, tv_usec=1795}, ru_stime={tv_sec=0, tv_usec=3590}, ...}) = 0
   236	write(1, "zero_high_bits_in_metadata: Memo"..., 84zero_high_bits_in_metadata: Memory used: 132k/0k (72k/61k), time:  0.00/ 0.00/ 0.00
   237	) = 84
   238	gettimeofday({tv_sec=1604061278, tv_usec=480147}, NULL) = 0
   239	getrusage(RUSAGE_SELF, {ru_utime={tv_sec=0, tv_usec=1802}, ru_stime={tv_sec=0, tv_usec=3603}, ...}) = 0
   240	gettimeofday({tv_sec=1604061278, tv_usec=480192}, NULL) = 0
   241	mmap(NULL, 2564096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fa3c7355000
   242	access("/sys/fs/ext4/features/lazy_itable_init", F_OK) = 0
   243	brk(0xf85000)                           = 0xf85000
   244	brk(0xfa6000)                           = 0xfa6000
   245	gettimeofday({tv_sec=1604061278, tv_usec=538828}, NULL) = 0
   246	getrusage(RUSAGE_SELF, {ru_utime={tv_sec=0, tv_usec=58720}, ru_stime={tv_sec=0, tv_usec=3603}, ...}) = 0
   247	write(1, "adjust_superblock: Memory used: "..., 79adjust_superblock: Memory used: 396k/2504k (305k/92k), time:  0.06/ 0.06/ 0.00
   248	) = 79
   249	gettimeofday({tv_sec=1604061278, tv_usec=539119}, NULL) = 0
   250	getrusage(RUSAGE_SELF, {ru_utime={tv_sec=0, tv_usec=58812}, ru_stime={tv_sec=0, tv_usec=3603}, ...}) = 0
   251	gettimeofday({tv_sec=1604061279, tv_usec=939}, NULL) = 0
   252	getrusage(RUSAGE_SELF, {ru_utime={tv_sec=0, tv_usec=520411}, ru_stime={tv_sec=0, tv_usec=3603}, ...}) = 0
   253	write(1, "fix_uninit_block_bitmaps 2: Memo"..., 88fix_uninit_block_bitmaps 2: Memory used: 396k/2504k (305k/92k), time:  0.46/ 0.46/ 0.00
   254	) = 88
```

In particular, the access to /sys/fs seemed interesting. Looking
at the source of resize2fs:

```
[root@ip-172-31-22-182:~/e2fsprogs-1.45.5]# rg -B2 -A1 /sys/fs/ext4/features/lazy_itable_init .
./resize/resize2fs.c
923-	if (getenv("RESIZE2FS_FORCE_LAZY_ITABLE_INIT") ||
924-	    (!getenv("RESIZE2FS_FORCE_ITABLE_INIT") &&
925:	     access("/sys/fs/ext4/features/lazy_itable_init", F_OK) == 0))
926-		lazy_itable_init = 1;
```

I confirmed /sys is mounted, and then found a bug report suggesting the
ext4 module might not be loaded:
https://bugzilla.redhat.com/show_bug.cgi?id=1071909

My home server doesn't have ext4 loaded and had 3T to play with, so
I tried (and succeeded in) replicating the issue locally:

```
[root@kif:/scratch]# lsmod | grep -i ext

[root@kif:/scratch]# zfs create -V 3G rpool/scratch/ext4

[root@kif:/scratch]# time mkfs.ext4 /dev/zvol/rpool/scratch/ext4
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 786432 4k blocks and 196608 inodes
Filesystem UUID: 560a4a8f-93dc-40cc-97a5-f10049bf801f
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

real	0m2.261s
user	0m0.000s
sys	0m0.025s

[root@kif:/scratch]# zfs set volsize=3T rpool/scratch/ext4

[root@kif:/scratch]# time resize2fs -d 62 /dev/zvol/rpool/scratch/ext4
resize2fs 1.45.5 (07-Jan-2020)
fs has 11 inodes, 1 groups required.
fs requires 16390 data blocks.
With 1 group(s), we have 22234 blocks available.
Last group's overhead is 10534
Need 16390 data blocks in last group
Final size of last group is 26924
Estimated blocks needed: 26924
Extents safety margin: 49
Resizing the filesystem on /dev/zvol/rpool/scratch/ext4 to 805306368 (4k) blocks.
read_bitmaps: Memory used: 132k/0k (63k/70k), time:  0.00/ 0.00/ 0.00
read_bitmaps: I/O read: 1MB, write: 0MB, rate: 3802.28MB/s
fix_uninit_block_bitmaps 1: Memory used: 132k/0k (63k/70k), time:  0.00/ 0.00/ 0.00
resize_group_descriptors: Memory used: 132k/0k (68k/65k), time:  0.00/ 0.00/ 0.00
move_bg_metadata: Memory used: 132k/0k (68k/65k), time:  0.00/ 0.00/ 0.00
zero_high_bits_in_metadata: Memory used: 132k/0k (68k/65k), time:  0.00/ 0.00/ 0.00
```

Here it got stuck for quite some time... stracing it 20 minutes in revealed this in a tight loop:

```
getuid()                                = 0
geteuid()                               = 0
getgid()                                = 0
getegid()                               = 0
prctl(PR_GET_DUMPABLE)                  = 1 (SUID_DUMP_USER)
fallocate(3, FALLOC_FL_ZERO_RANGE, 2222649901056, 2097152) = 0
fsync(3)                                = 0
```

It finally ended 43(!) minutes later:

```
adjust_superblock: Memory used: 264k/3592k (210k/55k), time: 2554.03/ 0.16/15.07
fix_uninit_block_bitmaps 2: Memory used: 264k/3592k (210k/55k), time:  0.16/ 0.16/ 0.00
blocks_to_move: Memory used: 264k/3592k (211k/54k), time:  0.00/ 0.00/ 0.00
Number of free blocks: 755396/780023556, Needed: 0
block_mover: Memory used: 264k/3592k (216k/49k), time:  0.05/ 0.01/ 0.00
block_mover: I/O read: 1MB, write: 0MB, rate: 18.68MB/s
inode_scan_and_fix: Memory used: 264k/3592k (216k/49k), time:  0.00/ 0.00/ 0.00
inode_ref_fix: Memory used: 264k/3592k (216k/49k), time:  0.00/ 0.00/ 0.00
move_itables: Memory used: 264k/3592k (216k/49k), time:  0.00/ 0.00/ 0.00
calculate_summary_stats: Memory used: 264k/3592k (216k/49k), time: 16.35/16.35/ 0.00
fix_resize_inode: Memory used: 264k/3592k (222k/43k), time:  0.04/ 0.00/ 0.00
fix_resize_inode: I/O read: 1MB, write: 0MB, rate: 22.80MB/s
fix_sb_journal_backup: Memory used: 264k/3592k (222k/43k), time:  0.00/ 0.00/ 0.00
overall resize2fs: Memory used: 264k/3592k (222k/43k), time: 2570.90/16.68/15.07
overall resize2fs: I/O read: 1MB, write: 1MB, rate: 0.00MB/s
The filesystem on /dev/zvol/rpool/scratch/ext4 is now 805306368 (4k) blocks long.

real	43m1.943s
user	0m16.761s
sys	0m15.069s
```

I then cleaned up and recreated the zvol, loaded the ext4 module, created the ext4 fs,
resized the volume, and ran resize2fs again, and this time it went quite quickly:

```
[root@kif:/scratch]# zfs destroy rpool/scratch/ext4

[root@kif:/scratch]# zfs create -V 3G rpool/scratch/ext4

[root@kif:/scratch]# modprobe ext4

[root@kif:/scratch]# time mkfs.ext4 /dev/zvol/rpool/scratch/ext4
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 786432 4k blocks and 196608 inodes
Filesystem UUID: 5b415f2f-a8c4-4ba0-ac1d-78860de77610
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

real	0m1.013s
user	0m0.001s
sys	0m0.023s

[root@kif:/scratch]# zfs set volsize=3T rpool/scratch/ext4

[root@kif:/scratch]# time resize2fs -d 62 /dev/zvol/rpool/scratch/ext4
resize2fs 1.45.5 (07-Jan-2020)
fs has 11 inodes, 1 groups required.
fs requires 16390 data blocks.
With 1 group(s), we have 22234 blocks available.
Last group's overhead is 10534
Need 16390 data blocks in last group
Final size of last group is 26924
Estimated blocks needed: 26924
Extents safety margin: 49
Resizing the filesystem on /dev/zvol/rpool/scratch/ext4 to 805306368 (4k) blocks.
read_bitmaps: Memory used: 132k/0k (63k/70k), time:  0.00/ 0.00/ 0.00
read_bitmaps: I/O read: 1MB, write: 0MB, rate: 3389.83MB/s
fix_uninit_block_bitmaps 1: Memory used: 132k/0k (63k/70k), time:  0.00/ 0.00/ 0.00
resize_group_descriptors: Memory used: 132k/0k (68k/65k), time:  0.00/ 0.00/ 0.00
move_bg_metadata: Memory used: 132k/0k (68k/65k), time:  0.00/ 0.00/ 0.00
zero_high_bits_in_metadata: Memory used: 132k/0k (68k/65k), time:  0.00/ 0.00/ 0.00
adjust_superblock: Memory used: 264k/1540k (210k/55k), time:  0.02/ 0.02/ 0.00
fix_uninit_block_bitmaps 2: Memory used: 264k/1540k (210k/55k), time:  0.15/ 0.15/ 0.00
blocks_to_move: Memory used: 264k/1540k (211k/54k), time:  0.00/ 0.00/ 0.00
Number of free blocks: 755396/780023556, Needed: 0
block_mover: Memory used: 264k/3592k (216k/49k), time:  0.01/ 0.01/ 0.00
block_mover: I/O read: 1MB, write: 0MB, rate: 157.11MB/s
inode_scan_and_fix: Memory used: 264k/3592k (216k/49k), time:  0.00/ 0.00/ 0.00
inode_ref_fix: Memory used: 264k/3592k (216k/49k), time:  0.00/ 0.00/ 0.00
move_itables: Memory used: 264k/3592k (216k/49k), time:  0.00/ 0.00/ 0.00

calculate_summary_stats: Memory used: 264k/3592k (216k/49k), time: 16.20/16.20/ 0.00
fix_resize_inode: Memory used: 264k/3592k (222k/43k), time:  0.00/ 0.00/ 0.00
fix_resize_inode: I/O read: 1MB, write: 0MB, rate: 5319.15MB/s
fix_sb_journal_backup: Memory used: 264k/3592k (222k/43k), time:  0.00/ 0.00/ 0.00
overall resize2fs: Memory used: 264k/3592k (222k/43k), time: 16.45/16.38/ 0.00
overall resize2fs: I/O read: 1MB, write: 1MB, rate: 0.06MB/s
The filesystem on /dev/zvol/rpool/scratch/ext4 is now 805306368 (4k) blocks long.

real	0m17.908s
user	0m16.386s
sys	0m0.079s
```

Success!
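
The fix named in the subject follows directly from this: load the ext
modules before the filesystem is grown, so resize2fs sees /sys/fs/ext4
and takes the fast path. A minimal sketch of the idea in a stage-1-style
shell script (the variable name is illustrative, not the actual NixOS
code):

```
# Load the ext filesystem modules so resize2fs can find
# /sys/fs/ext4/features/lazy_itable_init and use lazy itable init.
for mod in ext2 ext3 ext4; do
    modprobe "$mod" || true
done

# With ext4 loaded, growing a large volume takes seconds instead of
# tens of minutes spent in fallocate(FALLOC_FL_ZERO_RANGE) + fsync loops.
resize2fs "$rootDevice"
```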
2020-10-30 12:18:23 -04:00
Graham Christensen a179781696
stage-1: add datestamps to logs
When the stage-1 logs get imported into the journal, they all get
loaded with the same timestamp. This makes it difficult to identify
what might be taking a long time in early boot.
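
A simple way to achieve that in a shell-based stage-1 is to prefix each
logged line with the current time; a sketch of the idea, not the exact
implementation:

```
# Emit log lines with their own timestamp so the journal import
# preserves how long each early-boot step actually took.
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"
}

log "loading kernel modules..."
log "resizing root filesystem..."
```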
2020-10-30 12:16:35 -04:00
Max Hausch 45d88250f2
motion 4.3.1 -> 4.3.2 2020-10-30 17:12:37 +01:00
Graham Christensen 2bf1fc0345
create-amis: allow customizing the service role name
The complete setup on the AWS end can be configured
with the following Terraform configuration. It generates
a ./credentials.sh which I just copy/pasted into the
create-amis.sh script near the top. Note: the entire stack
of users and the bucket can be destroyed at the end of the
import.

    variable "region" {
      type = string
    }
    variable "availability_zone" {
      type = string
    }

    provider "aws" {
      region = var.region
    }

    resource "aws_s3_bucket" "nixos-amis" {
      bucket_prefix = "nixos-amis-"
      lifecycle_rule {
        enabled = true
        abort_incomplete_multipart_upload_days = 1
        expiration {
          days = 7
        }
      }
    }

    resource "local_file" "credential-file" {
      file_permission = "0700"
      filename = "${path.module}/credentials.sh"
      sensitive_content = <<SCRIPT
    export service_role_name="${aws_iam_role.vmimport.name}"
    export bucket="${aws_s3_bucket.nixos-amis.bucket}"
    export AWS_ACCESS_KEY_ID="${aws_iam_access_key.uploader.id}"
    export AWS_SECRET_ACCESS_KEY="${aws_iam_access_key.uploader.secret}"
    SCRIPT
    }

    # The following resources are for the *uploader*
    resource "aws_iam_user" "uploader" {
      name = "nixos-amis-uploader"
    }

    resource "aws_iam_access_key" "uploader" {
      user = aws_iam_user.uploader.name
    }

    resource "aws_iam_user_policy" "upload-to-nixos-amis" {
      user = aws_iam_user.uploader.name

      policy = data.aws_iam_policy_document.upload-policy-document.json
    }

    data "aws_iam_policy_document" "upload-policy-document" {
      statement {
        effect = "Allow"

        actions = [
          "s3:ListBucket",
          "s3:GetBucketLocation",
        ]

        resources = [
          aws_s3_bucket.nixos-amis.arn
        ]
      }

      statement {
        effect = "Allow"

        actions = [
          "s3:PutObject",
          "s3:GetObject",
          "s3:DeleteObject",
        ]

        resources = [
          "${aws_s3_bucket.nixos-amis.arn}/*"
        ]
      }

      statement {
        effect = "Allow"
        actions = [
          "ec2:ImportSnapshot",
          "ec2:DescribeImportSnapshotTasks",
          "ec2:DescribeImportSnapshotTasks",
          "ec2:RegisterImage",
          "ec2:DescribeImages"
        ]
        resources = [
          "*"
        ]
      }
    }

    # The following resources are for the *vmimport service user*
    # See: https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-role
    resource "aws_iam_role" "vmimport" {
      assume_role_policy = data.aws_iam_policy_document.vmimport-trust.json
    }

    resource "aws_iam_role_policy" "vmimport-access" {
      role = aws_iam_role.vmimport.id
      policy = data.aws_iam_policy_document.vmimport-access.json
    }

    data "aws_iam_policy_document" "vmimport-access" {
      statement {
        effect = "Allow"
        actions = [
          "s3:GetBucketLocation",
          "s3:GetObject",
          "s3:ListBucket",
        ]
        resources = [
          aws_s3_bucket.nixos-amis.arn,
          "${aws_s3_bucket.nixos-amis.arn}/*"
        ]
      }
      statement {
        effect = "Allow"
        actions = [
          "ec2:ModifySnapshotAttribute",
          "ec2:CopySnapshot",
          "ec2:RegisterImage",
          "ec2:Describe*"
        ]
        resources = [
          "*"
        ]
      }
    }

    data "aws_iam_policy_document" "vmimport-trust" {
      statement {
        effect = "Allow"
        principals {
          type = "Service"
          identifiers = [ "vmie.amazonaws.com" ]
        }

        actions = [
          "sts:AssumeRole"
        ]

        condition {
          test = "StringEquals"
          variable = "sts:ExternalId"
          values = [ "vmimport" ]
        }
      }
    }
2020-10-30 12:12:08 -04:00
rnhmjoj e79ac7b3d6
bdf2psf: 1.197 -> 1.198 2020-10-30 17:08:35 +01:00
Graham Christensen e253de8a77
create-amis.sh: log the full response if describing the import snapshot tasks fails 2020-10-30 12:08:01 -04:00
Graham Christensen f92a883ddb
nixos ec2/create-amis.sh: shellcheck: $ is not needed in arithmetic 2020-10-30 12:08:01 -04:00
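The pattern this cleanup targets, for reference (illustrative, not the
actual diff):

```
i=3
# Flagged: the dollar sign is redundant on variables inside arithmetic contexts.
echo $(($i + 1))
# Preferred:
echo $((i + 1))
```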
Graham Christensen 7dac8470cf
nixos ec2/create-amis.sh: shellcheck: explicitly make the additions to block_device_mappings single strings 2020-10-30 12:08:00 -04:00
Graham Christensen a66a22ca54
nixos ec2/create-amis.sh: shellcheck: read without -r mangles backslashes 2020-10-30 12:08:00 -04:00
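The rule here is shellcheck's SC2162; an illustrative example, not the
actual change:

```
# Without -r, read treats backslashes as escape characters and mangles
# input such as 'a\tb': the backslash is silently dropped.
printf 'a\\tb\n' | while read line; do echo "$line"; done

# With -r, the line is taken literally.
printf 'a\\tb\n' | while read -r line; do echo "$line"; done
```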
Graham Christensen baf7ed3f24
nixos ec2/create-amis.sh: shellcheck: SC2155: Declare and assign separately to avoid masking return values. 2020-10-30 12:07:59 -04:00
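The SC2155 pattern, for context (function and variable names made up for
illustration):

```
f() {
    # Flagged: 'local' succeeds even if the command substitution fails,
    # so the failure's exit status is masked.
    local now=$(date)
}

g() {
    # Preferred: declare and assign separately so $? reflects the assignment.
    local now
    now=$(date)
}
```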
Graham Christensen f5994c208d
nixos ec2/create-amis.sh: shellcheck: quote state_dir reference 2020-10-30 12:07:59 -04:00
Graham Christensen c76692192a
nixos ec2/create-amis.sh: shellcheck: quote region references 2020-10-30 12:07:49 -04:00
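Both quoting fixes address the usual word-splitting and globbing issue
(SC2086); the values below are made up for illustration:

```
state_dir="/home/deploy/amis/ec2-images"
region="us-east-1"

# Unquoted: breaks if a value ever contains spaces or glob characters.
mkdir -p $state_dir/$region

# Quoted: the value is passed through verbatim.
mkdir -p "$state_dir/$region"
```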
Marek Mahut 242441fea2
Merge pull request #101847 from mmilata/python-trezor-udev-linux
python3Packages.trezor: make udev rules dependency linux-only
2020-10-30 17:00:09 +01:00
Jörg Thalheim 46731b8886
Merge pull request #100814 from unode/samtools 2020-10-30 16:44:27 +01:00
Daniël de Kok 44177770f3
Merge pull request #102089 from r-ryantm/auto-update/1password
_1password-gui: 0.9.0 -> 0.9.1
2020-10-30 16:39:08 +01:00
Jörg Thalheim 6793ec82ca
Merge pull request #101948 from matthew-piziak/tdlib-169 2020-10-30 16:34:47 +01:00
Zak B. Elep 52c05c8791 perlPackages.NetAsyncWebSocket: init at 0.13 2020-10-30 23:17:17 +08:00
Peter Hoeg 05d95cfe79 kdeconnect: avoid double-wrapping the binary 2020-10-30 22:34:02 +08:00
Peter Hoeg dfd29f9d7c zanshin: broken before the 20.08.2 upgrade 2020-10-30 22:34:02 +08:00
Peter Hoeg 0d25246f4d kdeconnect: part of kdeApplications 2020-10-30 22:34:02 +08:00
Peter Hoeg d87b88361a okular: add missing dependency 2020-10-30 22:34:02 +08:00
Peter Hoeg 7ac898fec2 kdeApplications: 20.08.1 -> 20.08.2 2020-10-30 22:34:02 +08:00
WilliButz a3c16e973a
Merge pull request #102094 from Frostman/blackbox-exporter-0.18.0
blackbox-exporter: 0.17.0 -> 0.18.0
2020-10-30 15:11:58 +01:00
Zak B. Elep 489c73671a perlPackages.NetAsyncHTTP: init at 0.47 2020-10-30 22:11:30 +08:00
Mario Rodas e250fef768
Merge pull request #99920 from ericdallo/add-dart-to-flutter
flutter: Bump and add dart cache to flutter
2020-10-30 08:56:54 -05:00
Pierre Bourdon ee36b1cd5b plover.dev: fix Qt version pinning
Issue report: https://github.com/NixOS/nixpkgs/issues/65399#issuecomment-719066888

Similar issues in #98067.

Plover seems to work fine with Qt > 5.14, so this is an easy way to fix
the problem (as opposed to keeping the pinning and making it work with
PyQt).
2020-10-30 14:55:16 +01:00
WilliButz 207804705d
grafana: 7.3.0 -> 7.3.1
https://github.com/grafana/grafana/releases/tag/v7.3.1
2020-10-30 14:53:59 +01:00