<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title type="text">Fedora Planet</title>
<id>http://fedoraplanet.org/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/" />
<link href="http://fedoraplanet.org/" rel="self" />
<author>
<name></name>
</author>
<subtitle type="text">http://fedoraplanet.org/</subtitle>
<generator>PyAtom</generator>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Mumble ready for testing</title>
<id>http://fedoraplanet.org/2/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/2/" />
<author>
<name>Justin W. Flory</name>
</author>
<content type="html/text">Mumble is back in Fedora
Mumble, a free and open-source VoIP program</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Understanding the kernel release cycle</title>
<id>http://fedoraplanet.org/3/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/3/" />
<author>
<name>Josh Boyer</name>
</author>
<content type="html/text">A few days ago, a Fedora community member was asking if there was a 4.3 kernel for F23 yet (there isn't). When pressed for why, it turns out they were asking for someone who wanted newer support but thought a 4.4-rc3 kernel was too new. This was surprising to me. The assumption was being made that 4.4-rc3 was unstable or dangerous, but that 4.3 was just fine, even though Fedora hasn't updated the release branches with it yet. This led me to ponder the upstream kernel release cycle a bit, and I thought I would offer some insights as to why the version number might not represent what most people think it does.

First, I will start by saying that the upstream kernel development process is amazing. The rate of change for the 4.3 kernel was around 8 patches per hour, by 1600 developers, for a total of 12,131 changes over 63 days total[1]. And that is considered a fairly calm release by kernel standards. The fact that the community continues to churn out kernels of such quality at that rate of change is very impressive. There is actually quite a bit of background coordination that goes on between the subsystem maintainers, but I'm going to focus on how Linus' releases are formed for the sake of simplicity for now.

A kernel release is broken into a set of discrete, roughly time-based chunks. The first chunk is the two-week merge window. This is the timeframe in which the subsystem maintainers send the majority of the new changes for that release to Linus. He takes them in via git pull requests, grumbles about a fair number of them, and refuses a few others. Most of the pull requests are dealt with in the first week, but there are always a few late ones, so Linus waits the two weeks and then "closes" the window. This culminates in the first RC release being cut.

From that point on, the focus for the release is fixes. New code being taken at this point is fairly rare, but it does happen in the early -rc releases.
These are cut roughly every Sunday evening, making for a one-week timeframe per -rc. Each -rc release tends to be smaller in new changesets than the previous, as the community becomes much more picky about what is acceptable the longer the release has gone on. Typically it gets to -rc7, but occasionally it will go to -rc8. One week after -rc7 is released, the "final" release is cut, which maps nicely with the 63-day timeframe quoted above.

Now, here is where people start getting confused. They see a "final" release and immediately assume it's stable. It's not. There are bugs. Lots and lots of bugs. So why would Linus release a kernel with a lot of bugs? Because finding them all is an economy of scale. Let's step back a second into the development cycle and see why.

During the development cycle, people are constantly testing things. However, not everyone is testing the same thing. Each subsystem maintainer is often testing their own git tree for their specific subsystem. At the same time, they've opened their subsystem trees for changes for the next version of the kernel, not the one still in -rcX state. So they have new code coming in before the current code is even released. This is how they sustain that massive rate of change.

Aside from subsystem trees, there is the linux-next tree. This is a daily merge of all the subsystem maintainer trees that have already opened up to new code, on top of whatever Linus has in his tree. A number of people are continually testing linux-next, mostly through automated bots, but also in VMs and running fuzzers and such. In theory and in practice, this catches bugs before they get to Linus the next round. But it is complicated, because the rate of change means that if an issue is hit, it's hard to see if it's in the new code found only in linux-next, or if it's actually in Linus' tree.
Determining that usually winds up being a manual process via git-bisect, but sometimes the testing bots can determine the offending commit in an automated fashion.

If a bug is found, the subsystem maintainer or patch author or whomever must track which tree the bug is in, whether it's a small enough fix to go into whatever -rcX state Linus' tree is in, and how to get it there. This is very much a manual process, and often involves multiple humans. Given that humans are terrible at multitasking in general, and grow ever more cautious the later the -rcX state is, sometimes fixes are missed or simply queued for the next merge window. That's not to say important bugs are not fixed, because clearly there are several weeks of -rcX releases where fixing is the primary concern. However, with all the moving parts, you're never going to find all the bugs in time.

In addition to the rate-of-change/forest-of-trees issue, there's also the diversity and size of the tester pool. Most of the bots test via VMs. VMs are wonderful tools, but they don't test the majority of the drivers in the kernel. The kernel developers themselves tend to have higher-end laptops. Traditionally this was of the Thinkpad variety and a fair number of those are still seen, but there is some variance here now, which is good. But it isn't good enough to cover all possible firmware, device, memory, and workload combinations. There are other testers to be sure, but they only cover a tiny fraction of the end-user machines.

It isn't hard to see how bugs slip through, particularly in drivers or on previous-generation hardware. I wouldn't even call it a problem, really. No software project is going to cut a release with 0 bugs in it. It simply doesn't happen. The kernel is actually fairly high quality at release time in spite of this. However, as I said earlier, people tend to make assumptions and think it's good enough for daily use on whatever hardware they have.
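The git-bisect step mentioned above can be sketched in miniature. This is a toy illustration, not the real workflow: the repository, file names, and the "bug" are all made up, and in real kernel work the test script would build and boot a kernel rather than grep a file.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name demo

# Ten commits; commit 7 introduces the "bug" (a file containing BUG).
for i in $(seq 1 10); do
    echo "ok $i" > code.txt
    if [ "$i" -eq 7 ]; then echo BUG > bug.txt; fi
    git add -A
    git commit -qm "change $i"
done

# Bisect between the known-good first commit and the broken HEAD,
# letting a script decide good (exit 0) vs bad (non-zero) automatically.
first=$(git rev-list --max-parents=0 HEAD)
git bisect start HEAD "$first"
git bisect run sh -c '! grep -q BUG bug.txt 2>/dev/null' > /dev/null
bad_commit=$(git show -s --format=%s refs/bisect/bad)
echo "first bad commit: $bad_commit"
git bisect reset > /dev/null
```

With an automatable test like this, `git bisect run` needs only log2(N) checkouts to pin down the offending commit; the manual pain comes when "good vs bad" requires a human at a keyboard.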
Then they're surprised when it might not be.

To combat this problem, we have the upstream stable trees. These trees backport fixes from the current development kernels that also apply to the already-released kernels. Hence, 4.3.1 is Linus' 4.3 release, plus a number of fixes that were found "too late". This, in my opinion, is where the bulk of the work on making a kernel usable happens. It is also somewhat surprising when you look at it.

The first stable release of a kernel, a .1 release, is actually very large. It often comprises 100-200 individual changes that are being backported from the current development kernel. That means there are 100-200 bugs immediately being fixed there. Whew, that's a lot, but OK, maybe expected with everything above taken into account. Except the .2 release is often also 100-200 patches. And .3. And often .4. It isn't until you start getting into .5, .6, .7, etc. that the patch count starts getting smaller. By the .9 release, it's usually time to retire the whole tree (unless it's a long-term stable) and start the fun all over again.

In dealing with the Fedora kernel, the maintainers take all of this into account. This is why it is very rare to see us push a 4.x.0 kernel to a stable release, and often it isn't until .2 that you see a build. For those thinking that this article is somehow deriding the upstream kernel development process, I hope you now realize the opposite is true. We rely heavily on upstream following through and tagging and fixing the issues it finds, either while under development or via the stable process. We help the stable process as well by reporting back fixes if they aren't already found.

So hopefully next time you're itching for a new kernel just because it's been released upstream, you'll pause and think about this. And if you really want to help, you'll grab a rawhide kernel and pitch in by reporting any issues when you find them.
The only way to make the stable kernel releases smaller, and reduce the number of bugs still found in freshly released kernels, is to broaden the tester pool and let the upstream developers know as soon as possible. In this way, we're all part of the upstream kernel community and we can all keep making it awesome and impressive.

(4.3.y will likely be coming to F23 the first week of January. Greg KH seems to have gone on some kind of walkabout the past few weeks, so 4.3.1 hasn't been released yet. To be honest, it's a well-deserved break. Or maybe he found 4.3 to be even more buggy than usual. Who knows!)

[1] https://lwn.net/Articles/661978/ (Of course I was going to link to lwn.net. If you aren't already subscribed to it, you really should be. They have amazing articles and technical content. They make my stuff look like junk even more than it already is. I'm kinda jealous of the energy and expertise they show in their writing.)</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Git, binary files, and patches</title>
<id>http://fedoraplanet.org/4/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/4/" />
<author>
<name>Laura Abbott</name>
</author>
<content type="html/text">$ mkdir test_repo
$ cd test_repo/
$ git init
Initialized empty Git repository in /home/labbott/test_repo/.git/
$ touch foo
$ git add foo
$ git commit -m "this file"
[ master (root-commit) c51ba67] this file
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 foo
$ cp ~/a.out .
$ file a.out
a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter
/lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32,
BuildID[sha1]=96d6131eb203b850d42b53a7f5ebc512056ec739, not stripped
$ git add a.out
$ git commit -m "A binary file"
[master d5bdd17] A binary file
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100755 a.out
$ git rm a.out
rm 'a.out'
$ git commit -m "no binary"
[master 9dd9f2b] no binary
1 file changed, 0 insertions(+), 0 deletions(-)
delete mode 100755 a.out
$ git format-patch -1 --no-binary HEAD
0001-no-binary.patch
$ git reset --hard HEAD^
HEAD is now at d5bdd17 A binary file
$ patch -p1 &lt; 0001-no-binary.patch
patching file a.out
Not deleting file a.out as content differs from patch
$ echo $?
1
$ cat 0001-no-binary.patch
From 9dd9f2b6c717d4125d790610941f258bdb573ee4 Mon Sep 17 00:00:00 2001
From: Laura Abbott &lt;[email protected]&gt;
Date: Wed, 2 Dec 2015 10:45:33 -0800
Subject: [PATCH] no binary
---
a.out | Bin 8784 -&gt; 0 bytes
1 file changed, 0 insertions(+), 0 deletions(-)
delete mode 100755 a.out
diff --git a/a.out b/a.out
deleted file mode 100755
index 3772793..0000000
Binary files a/a.out and /dev/null differ
--
2.5.0
$
This is the story behind a recent bugzilla. The patches generated on kernel.org only say that the binary files changed, so they can't actually be applied as diffs. Git deals with binary files just fine though, so it's possible to sneak some in and end up with a tree that can't be easily expressed in patches.
Binary files usually don't have a place in the kernel, but some did come in with a staging driver. The staging driver was deleted this merge window. Everything that isn't an official x.y kernel release (e.g. 4.3-rc4, 4.2.3) comes in as a patch file, so all patches are going to fail to apply until that commit makes it into an official release. The workaround right now is to modify the patch to get rid of the binary file deletion. This does mean the checksums aren't going to match against kernel.org, but this will only be the case until the next official release in rawhide, which should be sometime at the beginning of January. You'll just have to trust the Fedora kernel team in the meantime.</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">FESCo Elections: Interview with Kevin Fenzi (kevin / nirik)</title>
<id>http://fedoraplanet.org/5/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/5/" />
<author>
<name>Fedora Community Blog</name>
</author>
<content type="html/text">Fedora Engineering Steering Council badge</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">FAmSCo Elections: Interview with Abdel Martínez (potty)</title>
<id>http://fedoraplanet.org/6/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/6/" />
<author>
<name>Fedora Community Blog</name>
</author>
<content type="html/text">Fedora Ambassador Steering Committee badge</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Three years and counting</title>
<id>http://fedoraplanet.org/7/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/7/" />
<author>
<name>Andrea Veri</name>
</author>
<content type="html/text">It’s been a while since my last “what’s been happening behind the scenes” e-mail, so I’m here to report on what has been happening within the GNOME Infrastructure, its future plans, and my personal impressions of a challenge that started around three (3) years ago, when Sriram Ramkrishna and Jeff Schroeder proposed my name as a possible candidate for coordinating the team that runs the systems behind the GNOME Project. All this was followed by the official hiring, arranged by Karen Sandler back in February 2013.
The GNOME Infrastructure has finally reached stability in terms of both reliability and uptime: we didn’t have any service disruptions this year or the past year, and services have been running as smoothly as expected in a project like the one we are managing.
As many of you know, service disruptions and a total lack of maintenance were very common before I joined back in 2013. I’m so glad the situation has dramatically changed and that developers, users, and enthusiasts are now able to reach our websites, code repositories, and build machines without experiencing slowness, downtime or unreachability. Additionally, all these groups of people now have a reference point they can contact in case they need help with the infrastructure they use daily. The ticketing system allows users to get in touch with the members of the Sysadmin Team and receive support within a very short period of time (also thanks to PagerDuty, a service the Foundation is kindly sponsoring).
Before moving ahead to the future plans, I’d like to provide a summary of what has been done during these roughly three years, so you can get an idea of why I call the changes that happened to the infrastructure a complete revamp:
We recycled several ancient machines, migrating services off of them and consolidating their configuration on our central configuration management platform run by Puppet. This includes a grand total of 7 machines that were replaced by new hardware and extended warranties the Foundation kindly sponsored.
We strengthened our websites' security by introducing SSL certificates everywhere and recently replacing them with SHA-2 certificates.
We introduced several services such as Owncloud, the Commits Bot, the Pastebin, the Etherpad, Jabber, the GNOME Github mirror.
We restructured the way we back up our machines, also thanks to the Fedora Project sponsoring the disk space on their backup facility. The way we handle backups has drastically changed: in the early years a magnetic tape facility carried the whole burden of archiving our data, while today a NetApp is used together with rdiff-backup.
We upgraded Bugzilla to the latest release, a huge thanks goes to Krzesimir Nowak who kindly helped us building the migration tools.
We introduced the GNOME Apprentice program, open-sourcing our internal Puppet repository and cleansing it (shallow clones FTW!) of any sensitive information, which now lives on a different repository with restricted access.
We retired Mango and our OpenLDAP instance in favor of FreeIPA which allows users to modify their account information on their own without waiting for the Accounts Team to process the change.
We documented how our internal tools are customized to play together making it easy for future Sysadmin Team members to learn how the infrastructure works and supersede existing members in case they aren’t able to keep up their position anymore.
We started providing hosting to the GIMP and GTK projects which now completely rely on the GNOME Infrastructure. (DNS, email, websites and other services infrastructure hosting)
We started providing hosting not only to the GIMP and GTK projects but to localized communities as well, such as GNOME Hispano and GNOME Greece.
We configured proper monitoring for all the hosted services thanks to Nagios and Check-MK.
We migrated the IRC network to a newer ircd with proper IRC services (Nickserv, Chanserv) in place.
We made sure each machine had a configured management (mgmt) and KVM interface for direct remote access to the bare metal machine itself, its hardware status and all the operations related to it. (hard reset, reboot, shutdown etc.)
We upgraded MoinMoin to the latest release and made a substantial cleanup of old accounts, pages marked as spam and trashed pages.
We deployed DNSSEC for several domains we manage including gnome.org, guadec.es, gnomehispano.es, guadec.org, gtk.org and gimp.org
We introduced an account de-activation policy which comes into play when a contributor who has not committed to any of the hosted repositories at git.gnome.org for two years is caught by the script. The account in question is marked as inactive and the gnomecvs (from the old cvs days) and ftpadmin groups are removed.
We planned mass reboots of all the machines roughly every month for properly applying security and kernel updates.
We introduced Mirrorbrain (MB), the mirroring service serving GNOME and related modules' tarballs and software all over the world. Before introducing MB, GNOME had several mirrors located in all the main continents but, at the same time, a very low number of users making good use of them. Many organizations and companies behind these mirrors decided to stop hosting GNOME sources, as the usage statistics were very poor, and preferred providing the same service to projects that really had a demand for these resources. MB solved all this by properly redirecting each user to the closest mirror (through mod_geoip) and by making sure the source checksums matched across all the mirrors and against the original tarball uploaded by a GNOME maintainer and hosted at master.gnome.org.
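The account de-activation check from the list above can be sketched roughly as follows. This is an illustration only: repository layout, e-mail addresses, and the shape of the real script are made up (the actual tooling also talks to the accounts system), but the core idea of listing authors with recent commits via `git log --since` is the same.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email old@example.com
git config user.name old
echo a > f
git add f
GIT_AUTHOR_DATE="2010-01-01T00:00:00" GIT_COMMITTER_DATE="2010-01-01T00:00:00" \
    git commit -qm "ancient commit"
git config user.email active@example.com
git config user.name active
echo b > f
git commit -qam "recent commit"

# Authors with at least one commit in the last two years:
git log --since="2 years ago" --format='%ae' | sort -u > active.txt

# Accounts absent from that list would be marked inactive:
for acct in old@example.com active@example.com; do
    grep -qx "$acct" active.txt || echo "inactive: $acct"
done
```

A real deployment would iterate this over every hosted repository and union the results before flagging anyone, so that activity in any one module keeps an account alive.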
I can keep the list going for dozens of other accomplished tasks but I’m sure many of you are now more interested in what the future plans actually are in terms of where the GNOME Infrastructure should be in the next couple of years.
One of the main topics we’ve been discussing will be migrating our Git infrastructure away from cgit (which is mainly serving as a code browsing tool) to a more complete platform that is surely going to include a code review tool of some sort. (Gerrit, Gitlab, Phabricator)
Another topic would be migrating our mailing lists to Mailman 3 / Hyperkitty. This also means we definitely need a staging infrastructure in place for testing these kinds of transitions, ideally bound to a separate Puppet / Ansible repository or branch. Having a different repository for testing purposes will also mean helping apprentices to test their changes directly on a live system, and not on their personal computer, which might be running a different OS / set of tools than the ones we run on the GNOME Infrastructure.
Another aim of mine is seeing GNOME Accounts become the only authentication resource in use within the whole GNOME Infrastructure. That means one should be able to log in to a specific service with the same username / password in use on the other hosted services. That's been on my todo list for a while already, and it's probably time to push it forward together with Patrick Puiterwijk, responsible for Ipsilon's development at Red Hat and a GNOME Sysadmin.
While these are the top-priority items, we are soon receiving new hardware (plus extended warranty renewals for two out of the three machines that had their warranty renewed a while back), and migrating some of the VMs off the current set of machines to the new boxes is definitely another task I'd be willing to look at in the next couple of months. (One machine, ns-master.gnome.org, is being decommissioned, giving me a chance to migrate away from BIND to NSD.)
The GNOME Infrastructure is evolving, and it's crucial to have someone maintaining it. On that front, I'm bringing to your attention the fact that the assigned Sysadmin funds are running out, as reported in the Board minutes from the 27th of October. Jeff Fortin started looking for possible sponsors and came up with the idea of making a brochure with a set of accomplished tasks that wouldn't have been possible without the success of the Sysadmin fundraising campaign launched by Stormy Peters back in June 2010. The Board is well aware of the importance of having someone looking after the infrastructure that runs the GNOME Project and is making sure the brochure will be properly reviewed and published.

And now some stats taken from the Puppet Git repository:

$ cd /git/GNOME/puppet &amp;&amp; git shortlog -ns
3520 Andrea Veri
506 Olav Vitters
338 Owen W. Taylor
239 Patrick Uiterwijk
112 Jeff Schroeder
71 Christer Edwards
4 Daniel Mustieles
4 Matanya Moses
3 Tobias Mueller
2 John Carr
2 Ray Wang
1 Daniel Mustieles García
1 Peter Baumgarten

and from the Request Tracker database (52388 being my assigned ID):

mysql&gt; select count(*) from Tickets where LastUpdatedBy = '52388';
+----------+
| count(*) |
+----------+
| 3613 |
+----------+
1 row in set (0.01 sec)
mysql&gt; select count(*) from Tickets where LastUpdatedBy = '52388' and Status = 'Resolved';
+----------+
| count(*) |
+----------+
| 1596 |
+----------+
1 row in set (0.03 sec)

It’s been a long run which has made me proud: for the things I learnt, for the tasks I’ve been able to accomplish, for the great support the GNOME community gave me all the time, and most of all for the simple fact of being part of the team responsible for the systems hosting the GNOME Project. Thank you, GNOME community, for your continued and never-ending backing. We work daily to improve how the services we host are delivered to you, and the support we receive back is fundamental for keeping our passion and enthusiasm high!
</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Is giving money to Conservancy the best course of action?</title>
<id>http://fedoraplanet.org/8/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/8/" />
<author>
<name>Daniel Pocock</name>
</author>
<content type="html/text">There has been a lot of discussion lately about Software Freedom Conservancy's fundraiser.
Various questions come to my mind:
Is this the only way to achieve goals such as defending copyright? (There are other options, like corporate legal insurance policies)
When all the options are compared, is Conservancy the best one? Maybe it is, but it would be great to confirm why we reached that conclusion.
Could it be necessary to choose two or more options that complement each other? Conservancy may just be one part of the solution and we may get a far better outcome if money is divided between Conservancy and insurance and something else.
What about all the other expenses that developers incur while producing free software? Many other professionals, like doctors, do work that is just as valuable for society but they are not made to feel guilty about asking for payment and reimbursement. (In fact, for doctors, there is no shortage of it from the drug companies).
There seems to be an awkwardness about dealing with money in the free software world and it means many projects continue to go from one crisis to the next. Just yesterday on another mailing list there was discussion about speakers regularly asking for reimbursement to attend conferences and at least one strongly worded email appeared questioning whether people asking about money are sufficiently enthusiastic about free software or if they are only offering to speak in the hope their trip will be paid.
The DebConf team experienced one of the more disappointing examples of a budget communication issue, when developers who had already volunteered long hours to prepare for the event then had to give up valuable time during the conference to wash the dishes for 300 people. Had the team simply insisted on budgeting for the high cost of local labor, which was known when the country was selected, the task could easily have been outsourced to local staff. This came about because some members of the community felt nervous about asking for budget and other people couldn't commit to spending.
Rather than stomping on developers who ask about money or anticipate the need for it in advance, I believe we need to ask people if money was not taboo, what is the effort they could contribute to the free software world and how much would they need to spend in a year for all the expenses that involved. After all, isn't that similar to the appeal from Conservancy's directors? If all developers and contributors were suitably funded, then many people would budget for contributions to Conservancy, other insurances, attending more events and a range of other expenses that would make the free software world operate more smoothly.
In contrast, the situation we have now (for event-related expenses) is that developers funding themselves or with tightly constrained budgets or grants often have to spend hours picking through AirBNB and airline web sites trying to get the best deal while those few developers who do have more flexible corporate charge cards just pick a convenient hotel and don't lose any time reading through the fine print to see if there are charges for wifi, breakfast, parking, hidden taxes and all the other gotchas because all of that will be covered for them.
With developer budgets/wishlists documented, where will the money come from? Maybe it won't appear, maybe it will. But if we don't ask for it at all, we are much less likely to get anything. Mozilla has recently suggested that developers need more cash and offered to put $1 million on the table to fix the problem, is it possible other companies may see the benefit of this and put up some cash too?
Promoting one large budget and gathering donations is probably a far more efficient use of time than the energy lost firefighting lots of little crisis situations.
Being more confident about money can also do a lot more to help engage people and make their participation sustainable in the long term. For example, if a younger developer is trying to save the equivalent of two years of their salary to pay a deposit on a house purchase, how will they feel about giving money to Conservancy or paying their own travel expenses to a free software event? Are their families and the other people they respect telling them to spend or to save, and if our message is not compatible with that, is it harder for us to connect with these people?
One other thing to keep in mind is that budgeting needs to include the costs of those who may help with the fund-raising and administration of money. If existing members of our projects are not excited about doing such work, we have to be willing to break from the "wait for a volunteer or do-it-yourself" attitude. There are so many chores that we as developers are perfectly capable of doing but still don't have time for; we are only fooling ourselves if we anticipate that effective fund-raising will take place without some incentives going back to those who do the work.
</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">OpenHardware and code signing (update)</title>
<id>http://fedoraplanet.org/9/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/9/" />
<author>
<name>Richard Hughes</name>
</author>
<content type="html/text">I posted a few weeks ago about the difficulty of providing device-side verification of firmware updates while at the same time remaining OpenHardware and thus easily hackable. The general consensus was that allowing anyone to write any kind of firmware to the device without additional authentication was probably a bad idea, even for OpenHardware devices. I think I’ve come up with an acceptable compromise I can write up as a recommendation, as per usual using the ColorHug+ as an example. For some background: I’ve sold nearly 3,000 original ColorHug devices, and in the last 4 years just three people wanted help writing custom firmware, so I hope you can see that the need to protect the majority far outweighs making the power users happy.
ColorHug+ will be supplied with a bootloader that accepts only firmware encrypted with the secret XTEA key that I’m using for my devices. XTEA is an acceptable compromise: not as strong as something like ECC, but with speed and memory requirements that are actually acceptable for an 8-bit microcontroller running at 6MHz with 8k of ROM. Flashing a DIY or modified firmware isn’t possible, and by the same logic flashing a malicious firmware will also not work.
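Purely as an illustration of why XTEA fits such a constrained part, here is a sketch of the cipher in Python; this is not the ColorHug bootloader code, and the key and block values used below are made up. Multiplication by 16 is used in place of the usual left shift by 4, which is equivalent for these values.

```python
DELTA = 0x9E3779B9  # XTEA key-schedule constant
MASK = 0xFFFFFFFF   # keep all arithmetic in 32-bit words

def xtea_encrypt(v0, v1, key, rounds=32):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key (four words)."""
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 * 16) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 * 16) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(v0, v1, key, rounds=32):
    """Inverse of xtea_encrypt: undo the rounds in reverse order."""
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 * 16) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 * 16) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1
```

The whole state is two 32-bit words and a round counter, which is why it fits where larger public-key schemes would not.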
To unlock the device (and so it stays OpenHardware) you just have to remove the two screws, and use a paper-clip to connect TP5 and GND while the device is being plugged into the USB port. Both lights will come on, and stay on for 5 seconds and then the code protection is turned off. This means you can now flash any home-made or malicious firmware to the device as you please.
There are downsides to unlocking: you can’t re-lock the hardware so that it supports official updates again. I don’t know if this is a huge problem; flashing home-made firmware could damage the device (e.g. changing the pin mapping from input to output and causing something to get hot). If this is a huge problem I can fix CH+ to allow re-locking and fix up the guidelines, although I’m erring towards unlocking being a one-way operation.
Comments welcome.</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">FAmSCo Elections: Interview with Dan Mossor (danofsatx)</title>
<id>http://fedoraplanet.org/10/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/10/" />
<author>
<name>Fedora Community Blog</name>
</author>
<content type="html/text">Fedora Ambassador Steering Committee badge</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Automatic Upstream Dependency Testing</title>
<id>http://fedoraplanet.org/11/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/11/" />
<author>
<name>Alexander Todorov</name>
</author>
<content type="html/text">Ever since
RHEL 7.2 python-libs broke s3cmd
I've been pondering an age-old problem: how do I know if my software works with the
latest upstream dependencies? How can I pro-actively monitor for new versions
and add them to my test matrix?
Mixing together my previous experience with
Difio
and monitoring upstream sources,
and Forbes Lindesay's GitHub Automation talk
at DEVit Conf, I came
up with a plan:
Make an application which will execute when a new upstream version is available;
Automatically update .travis.yml
for the projects I'm interested in;
Let Travis-CI execute my test suite for all available upstream versions;
Profit!
How Does It Work
First we need to monitor upstream! RubyGems.org has a nice
webhooks interface;
you can even trigger on individual packages. PyPI however doesn't have anything
like this :(. My solution is to run a cron job every hour and parse their RSS
stream for newly released packages. This worked previously for Difio,
so I re-used one function from that code.
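The hourly polling needs nothing special; a crontab entry along these lines would do (the script path is hypothetical, and GITHUB_TOKEN must be available in the job's environment):

```
# check the PyPI RSS feed at the top of every hour
0 * * * * /home/user/monitor/monitor.py
```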
After finding anything we're interested in comes the hard part - automatically
updating .travis.yml using the GitHub API. I've described this in more detail
here. This time
I've slightly modified the code to update only when needed and to accept more
parameters so it can be reused.
Travis-CI has a clean interface to specify environment variables, and
defining several
of them creates a test matrix. This is exactly what I'm doing.
.travis.yml is updated with a new ENV setting, which determines the upstream
package version. After the commit a new build is triggered, which includes the
expanded test matrix.
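For illustration, after a couple of updates the relevant part of such a .travis.yml might look like the following (the package name FOO and the version numbers are made up):

```yaml
language: python
env:
  - VERSION=0.3.1
  - VERSION=0.3.2
  - VERSION=0.3.3
install:
  - pip install FOO==$VERSION
script:
  - python setup.py test
```

Each env entry becomes one job in the Travis-CI matrix; the monitor script only ever appends new "- VERSION=" lines.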
Example
Imagine that our Project 2501 depends on FOO version 0.3.1. The
build log illustrates what
happened:
Build #9 is what we've tested with FOO-0.3.1 and released to production.
Test result is PASS!
Build #10 - meanwhile upstream releases FOO-0.3.2, which causes our project
to break. We're not aware of this and continue developing new features
while all test results still PASS! When our customers upgrade their systems
Project 2501 will break! Tests didn't catch it because the test matrix wasn't
updated. Please
ignore the actual commit message in the example! I've used the same repository
for the dummy dependency package.
Build #11 - the monitoring solution finds FOO-0.3.2 and updates the test
matrix automatically. The build immediately breaks! More precisely the
test with version 0.3.2 fails!
Build #12 - we've alerted FOO.org about their problem and they've released
FOO-0.3.3. Our monitor has found that and updated the test matrix.
However the 0.3.2 test job still fails!
Build #13 - we decide to workaround the 0.3.2 failure or simply handle the
error gracefully. In this example I've removed version 0.3.2 from the test
matrix to simulate that. In reality I wouldn't touch .travis.yml but instead
update my application and tests to check for that particular version.
All test results are PASS again!
Btw Build #11 above was triggered manually (./monitor.py) while Build #12
came from OpenShift, my hosting environment.
At present I have this monitoring enabled for my
new Markdown extensions
and will also add it to django-s3-cache
once it migrates to Travis-CI (it uses drone.io now).
Enough Talk, Show me the Code
monitor.py

#!/usr/bin/env python

import os
import sys
import json
import base64
import httplib

from pprint import pprint
from datetime import datetime
from xml.dom.minidom import parseString

def get_url(url, post_data = None):
    # GitHub requires a valid UA string
    headers = {
        'User-Agent' : 'Mozilla/5.0 (X11; Linux x86_64; rv:10.0.5) Gecko/20120601 Firefox/10.0.5',
    }

    # shortcut for GitHub API calls
    if url.find("://") == -1:
        url = "https://api.github.com%s" % url

    if url.find('api.github.com') &gt; -1:
        if not os.environ.has_key("GITHUB_TOKEN"):
            raise Exception("Set the GITHUB_TOKEN variable")
        else:
            headers.update({
                'Authorization': 'token %s' % os.environ['GITHUB_TOKEN']
            })

    (proto, host_path) = url.split('//')
    (host_port, path) = host_path.split('/', 1)
    path = '/' + path

    if url.startswith('https'):
        conn = httplib.HTTPSConnection(host_port)
    else:
        conn = httplib.HTTPConnection(host_port)

    method = 'GET'
    if post_data:
        method = 'POST'
        post_data = json.dumps(post_data)

    conn.request(method, path, body=post_data, headers=headers)
    response = conn.getresponse()

    if (response.status == 404):
        raise Exception("404 - %s not found" % url)

    result = response.read().decode('UTF-8', 'replace')
    try:
        return json.loads(result)
    except ValueError:
        # not a JSON response
        return result

def post_url(url, data):
    return get_url(url, data)

def monitor_rss(config):
    """
        Scan the PyPI RSS feeds to look for new packages.
        If name is found in config then execute the specified callback.

        @config is a dict with keys matching package names and values
        are lists of dicts
            {
                'cb' : a_callback,
                'args' : dict
            }
    """

    rss = get_url("https://pypi.python.org/pypi?:action=rss")
    dom = parseString(rss)
    for item in dom.getElementsByTagName("item"):
        try:
            title = item.getElementsByTagName("title")[0]
            pub_date = item.getElementsByTagName("pubDate")[0]

            (name, version) = title.firstChild.wholeText.split(" ")
            released_on = datetime.strptime(pub_date.firstChild.wholeText, '%d %b %Y %H:%M:%S GMT')

            if name in config.keys():
                print name, version, "found in config"
                for cfg in config[name]:
                    try:
                        args = cfg['args']
                        args.update({
                            'name' : name,
                            'version' : version,
                            'released_on' : released_on
                        })
                        # execute the call back
                        cfg['cb'](**args)
                    except Exception, e:
                        print e
                        continue
        except Exception, e:
            print e
            continue

def update_travis(data, new_version):
    travis = data.rstrip()
    new_ver_line = "  - VERSION=%s" % new_version
    if travis.find(new_ver_line) == -1:
        travis += "\n" + new_ver_line + "\n"
    return travis

def update_github(**kwargs):
    """
        Update GitHub via API
    """
    GITHUB_REPO = kwargs.get('GITHUB_REPO')
    GITHUB_BRANCH = kwargs.get('GITHUB_BRANCH')
    GITHUB_FILE = kwargs.get('GITHUB_FILE')

    # step 1: Get a reference to HEAD
    data = get_url("/repos/%s/git/refs/heads/%s" % (GITHUB_REPO, GITHUB_BRANCH))
    HEAD = {
        'sha' : data['object']['sha'],
        'url' : data['object']['url'],
    }

    # step 2: Grab the commit that HEAD points to
    data = get_url(HEAD['url'])
    # remove what we don't need for clarity
    for key in data.keys():
        if key not in ['sha', 'tree']:
            del data[key]
    HEAD['commit'] = data

    # step 4: Get a hold of the tree that the commit points to
    data = get_url(HEAD['commit']['tree']['url'])
    HEAD['tree'] = { 'sha' : data['sha'] }

    # intermediate step: get the latest content from GitHub and make an updated version
    for obj in data['tree']:
        if obj['path'] == GITHUB_FILE:
            data = get_url(obj['url']) # get the blob from the tree
            data = base64.b64decode(data['content'])
            break

    old_travis = data.rstrip()
    new_travis = update_travis(old_travis, kwargs.get('version'))

    # bail out if nothing changed
    if new_travis == old_travis:
        print "new == old, bailing out", kwargs
        return

    ####
    #### WARNING WRITE OPERATIONS BELOW
    ####

    # step 3: Post your new file to the server
    data = post_url(
        "/repos/%s/git/blobs" % GITHUB_REPO,
        {
            'content' : new_travis,
            'encoding' : 'utf-8'
        }
    )
    HEAD['UPDATE'] = { 'sha' : data['sha'] }

    # step 5: Create a tree containing your new file
    data = post_url(
        "/repos/%s/git/trees" % GITHUB_REPO,
        {
            "base_tree": HEAD['tree']['sha'],
            "tree": [{
                "path": GITHUB_FILE,
                "mode": "100644",
                "type": "blob",
                "sha": HEAD['UPDATE']['sha']
            }]
        }
    )
    HEAD['UPDATE']['tree'] = { 'sha' : data['sha'] }

    # step 6: Create a new commit
    data = post_url(
        "/repos/%s/git/commits" % GITHUB_REPO,
        {
            "message": "New upstream dependency found! Auto update .travis.yml",
            "parents": [HEAD['commit']['sha']],
            "tree": HEAD['UPDATE']['tree']['sha']
        }
    )
    HEAD['UPDATE']['commit'] = { 'sha' : data['sha'] }

    # step 7: Update HEAD, but don't force it!
    data = post_url(
        "/repos/%s/git/refs/heads/%s" % (GITHUB_REPO, GITHUB_BRANCH),
        {
            "sha": HEAD['UPDATE']['commit']['sha']
        }
    )

    if data.has_key('object'): # PASS
        pass
    else: # FAIL
        print data['message']

if __name__ == "__main__":
    config = {
        "atodorov-test" : [
            {
                'cb' : update_github,
                'args': {
                    'GITHUB_REPO' : 'atodorov/bztest',
                    'GITHUB_BRANCH' : 'master',
                    'GITHUB_FILE' : '.travis.yml'
                }
            }
        ],
        "Markdown" : [
            {
                'cb' : update_github,
                'args': {
                    'GITHUB_REPO' : 'atodorov/Markdown-Bugzilla-Extension',
                    'GITHUB_BRANCH' : 'master',
                    'GITHUB_FILE' : '.travis.yml'
                }
            },
            {
                'cb' : update_github,
                'args': {
                    'GITHUB_REPO' : 'atodorov/Markdown-No-Lazy-Code-Extension',
                    'GITHUB_BRANCH' : 'master',
                    'GITHUB_FILE' : '.travis.yml'
                }
            },
            {
                'cb' : update_github,
                'args': {
                    'GITHUB_REPO' : 'atodorov/Markdown-No-Lazy-BlockQuote-Extension',
                    'GITHUB_BRANCH' : 'master',
                    'GITHUB_FILE' : '.travis.yml'
                }
            },
        ],
    }

    # check the RSS to see if we have something new
    monitor_rss(config)
</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">All systems go</title>
<id>http://fedoraplanet.org/12/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/12/" />
<author>
<name>Fedora Infrastructure Status</name>
</author>
<content type="html/text">New status good: Everything seems to be working. for services: Zodbot IRC bot, FedoraHosted.org Services, Mailing Lists</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Netflix and Linux: The First 60 Seconds</title>
<id>http://fedoraplanet.org/13/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/13/" />
<author>
<name>Justin W. Flory</name>
</author>
<content type="html/text">Netflix and Linux may not agree on the desktop, but they do in the cloud. Source: blockless.com</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">There are scheduled downtimes in progress</title>
<id>http://fedoraplanet.org/14/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/14/" />
<author>
<name>Fedora Infrastructure Status</name>
</author>
<content type="html/text">New status scheduled: Scheduled maintenance in progress for services: Zodbot IRC bot, Mailing Lists, FedoraHosted.org Services</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Mumble ready for testing</title>
<id>http://fedoraplanet.org/15/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/15/" />
<author>
<name>Fedora Community Blog</name>
</author>
<content type="html/text">Mumble is back
Mumble, a free and open-source VoIP program</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">lsns(8) new command to list Linux namespaces</title>
<id>http://fedoraplanet.org/16/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/16/" />
<author>
<name>Karel Zak</name>
</author>
<content type="html/text">Namespaces are a commonly used way to isolate instances of global resources (ipc, mount, net, ...). Unfortunately, we have had no command line tool to list namespaces. The new command lsns(8) tries to fill this gap. Examples:

# lsns
        NS TYPE  NPROCS   PID USER   COMMAND
4026531836 pid      276     1 root   /usr/lib/systemd/systemd --system --deserialize 15
4026531837 user     276     1 root   /usr/lib/systemd/systemd --system --deserialize 15
4026531838 uts      276     1 root   /usr/lib/systemd/systemd --system --deserialize 15
4026531839 ipc      276     1 root   /usr/lib/systemd/systemd --system --deserialize 15
4026531840 mnt      269     1 root   /usr/lib/systemd/systemd --system --deserialize 15
4026531857 mnt        1    63 root   kdevtmpfs
4026531963 net      275     1 root   /usr/lib/systemd/systemd --system --deserialize 15
4026532189 mnt        1   545 root   /usr/lib/systemd/systemd-udevd
4026532390 net        1   776 rtkit  /usr/libexec/rtkit-daemon
4026532478 mnt        1   776 rtkit  /usr/libexec/rtkit-daemon
4026532486 mnt        1   847 colord /usr/libexec/colord
4026532518 mnt        3  6500 root   -bash

and list namespace content:

# lsns 4026532518
  PID  PPID USER COMMAND
 6500  6372 root -bash
19572  6500 root  └─/usr/bin/mc -P /tmp/mc-root/mc.pwd.6500
19575 19572 root    └─bash -rcfile .bashrc

help output with columns description:

# lsns -h

Usage:
 lsns [options] [namespace]

List system namespaces.

Options:
 -J, --json             use JSON output format
 -l, --list             use list format output
 -n, --noheadings       don't print headings
 -o, --output list      define which output columns to use
 -p, --task pid         print process namespaces
 -r, --raw              use the raw output format
 -u, --notruncate       don't truncate text in columns
 -t, --type name        namespace type (mnt, net, ipc, user, pid, uts)
 -h, --help             display this help and exit
 -V, --version          output version information and exit

Available columns (for --output):
      NS  namespace identifier (inode number)
    TYPE  kind of namespace
    PATH  path to the namespace
  NPROCS  number of processes in the namespace
     PID  lowest PID in the namespace
    PPID  PPID of the PID
 COMMAND  command line of the PID
     UID  UID of the PID
    USER  username of the PID

For more details see lsns(8).

The important detail is that you can see only namespaces accessible from the currently mounted /proc filesystem. lsns(8) is not able to list persistent namespaces without processes, where the namespace instance is held by bind mounts of the /proc/[pid]/ns/[type] files, and the output may be affected by an unshared PID namespace and an unshared /proc (see unshare(8) for more details). ... it will probably be available in util-linux v2.28 (~ January 2016).</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">FAmSCo Elections: Interview with Gabriele Trombini (mailga)</title>
<id>http://fedoraplanet.org/17/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/17/" />
<author>
<name>Fedora Community Blog</name>
</author>
<content type="html/text">Fedora Ambassador Steering Committee badge</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Hosting Multiple Python WSGI Scripts on OpenShift</title>
<id>http://fedoraplanet.org/18/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/18/" />
<author>
<name>Alexander Todorov</name>
</author>
<content type="html/text">With OpenShift you can host WSGI Python
applications. By default the Python cartridge comes with a simple WSGI app
and the following directory layout:
./
./.openshift/
./requirements.txt
./setup.py
./wsgi.py
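The cartridge expects a callable named application in wsgi.py. A minimal sketch of such a WSGI app (illustrative only, not the cartridge's exact default file) could be:

```python
def application(environ, start_response):
    """Minimal WSGI callable: answer every request with a plain-text body."""
    body = b'Hello World!'
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
    ])
    # WSGI expects an iterable of byte strings
    return [body]
```

Any WSGI-compliant server can then serve this callable directly.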
</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Commit a file with the GitHub API and Python</title>
<id>http://fedoraplanet.org/19/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/19/" />
<author>
<name>Alexander Todorov</name>
</author>
<content type="html/text">How do you commit changes to a file using the GitHub API?
I've found
this post
by Levi Botelho which explains the necessary steps but without any code.
So I've used it and created a
Python example.
I've rearranged the steps so that all write operations follow after a certain
section in the code and also added an intermediate section which creates the
updated content based on what is available in the repository.
I'm just appending
versions of Markdown to the .travis.yml (I will explain why in my next post)
and this is hard-coded for the sake of example. All content related operations
are also based on the GitHub API because I want to be independent of the source
code being around when I push this script to a hosting provider.
I've tested this script against itself. In the
commits log
you can find the Automatic update to Markdown-X.Y messages. These are
from the script. Also notice the Merge remote-tracking branch 'origin/master'
messages, these appeared when I pulled to my local copy. I believe the
reason for this is that I have some dangling trees and/or commits from
the time I was still experimenting with a broken script. I've tested on another
clean repository and there are
no such merges.
IMPORTANT
For this to work you need to properly authenticate with GitHub. I've created
a new token at https://github.com/settings/tokens with the public_repo
permission and that works for me.</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">podlators-4.00 in Rawhide</title>
<id>http://fedoraplanet.org/20/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/20/" />
<author>
<name>Perl SIG</name>
</author>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Trying Talky.io</title>
<id>http://fedoraplanet.org/21/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/21/" />
<author>
<name>Luya Tshimbalanga</name>
</author>
<content type="html/text">Talky.io have updated their website featuring their WebRTC chat. One intriguing feature is support for up to 15 people, highlighted below.Revamped Talky.io website featuring WebRTCIt appears their system is a worthy alternative to Google Hangouts. It would be nice if a project like Empathy received more love like this.</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Virtualbox on Fedora</title>
<id>http://fedoraplanet.org/22/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/22/" />
<author>
<name>Carlos Morel-Riquelme</name>
</author>
<content type="html/text">Keep it simple.

First update your kernel and then reboot your machine:
[root@new-host-5 asleqia]# dnf -y update kernel &amp;&amp; reboot

Now we need to install the dependencies and some kernel modules:
[root@new-host-5 asleqia]# dnf -y install binutils gcc make patch libgomp glibc-headers glibc-devel dkms kernel-devel kernel-core kernel-headers kernel-modules kernel-modules-extra

Now we need to download and install Virtualbox.
32 bits:
[root@new-host-5 asleqia]# dnf -y install http://download.virtualbox.org/virtualbox/5.0.10/VirtualBox-5.0-5.0.10_104061_fedora22-1.i686.rpm
64 bits:
[root@new-host-5 asleqia]# dnf -y install http://download.virtualbox.org/virtualbox/5.0.10/VirtualBox-5.0-5.0.10_104061_fedora22-1.x86_64.rpm

Run the virtualbox setup script:
[root@new-host-5 asleqia]# sudo /etc/init.d/vboxdrv setup
Stopping VirtualBox kernel modules [ OK ]
Uninstalling old VirtualBox DKMS kernel modules [ OK ]
Trying to register the VirtualBox kernel modules using DKMS [ OK ]
Starting VirtualBox kernel modules [ OK ]
[root@new-host-5 asleqia]#

Add your username to the virtualbox group:
[root@new-host-5 asleqia]# usermod -a -G vboxusers $USER

That’s all.</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">How Is Fossaegean Doing?</title>
<id>http://fedoraplanet.org/23/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/23/" />
<author>
<name>Giannis Konstantinidis</name>
</author>
<content type="html/text">I have been enrolled at the University of the Aegean for more than two years so far. It is a multi-campus university located in six (6) Greek islands: Chios, Lemnos, Lesvos, Rhodes, Samos and Syros. The Dept. of Information &amp; Communication Systems Engineering, where I'm studying, is based in the town of Karlovassi in Samos.
Since I moved to the island, one of the first things I did was to find out whether there were any people around interested in free &amp; open source technologies. Luckily, there was this community called fossaegean, which pretty much stands for the Free &amp; Open Source Software Community of the University of the Aegean. However, it was not that active back then.
Let me tell you something: I'm not just passionate about free &amp; open source software, I'm crazy about it. And I certainly enjoy spreading the word about things I value. That is why, together with other people, we decided to put some effort and bring the community back to life.
Over the last two academic years, we have organized more than fourteen (14) events (mostly workshops and presentations). For this academic year we had set a goal of ten (10) events, and within three (3) months we are already past seven (7). This probably makes us one of the most (if not the most) active tech-related student communities in our university.
During our "Intro to HTML" workshop (photo by Zacharias Mitzelos, CC BY-NC-ND).
Some of our very recent activities include: Intros to HTML &amp; CSS (part of our web dev series of workshops), a Fedora 23 Release Party, an Arduino workshop and not-to-forget those great OpenBBQs. For more info regarding our Events, you can have a look at this page in our wiki. Where do all these take place? Thankfully, we have our own space provided by the university. A soon-to-be fully-equipped hackerspace, I would say!
Greek Fedora contributors, alongside people from our community, during FOSSCOMM 2015 (photo by Zacharias Mitzelos, CC BY-NC-ND).
What could you expect in the near future? Plenty of workshops, for sure. We have some interesting topics, including Android, Arduino, BASH, Bitcoin, Fedora, Firefox OS, JavaScript, Jekyll, Ruby/Ruby on Rails and many more. But it's not just about the workshops; our goal is to bring students together and do stuff. There are quite a few projects we have in mind and I really can't wait to share more details with you.
Our people are the ones that make things possible and keep the space running. A big shoutout to Christos Sotirelis, George Makrakis, Vicky Tsima, Zacharias Mitzelos and many more, who currently act as the backbone of our community.
Exciting times ahead, wish us the best of luck! :)</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">Virtualbox on Fedora 23</title>
<id>http://fedoraplanet.org/24/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/24/" />
<author>
<name>Carlos Morel-Riquelme</name>
</author>
<content type="html/text">Keep it simple.

First update your kernel and then reboot:
[root@new-host-5 asleqia]# dnf -y update kernel &amp;&amp; reboot

Now we install the necessary dependencies and kernel modules:
[root@new-host-5 asleqia]# dnf -y install binutils gcc make patch libgomp glibc-headers glibc-devel dkms kernel-devel kernel-core kernel-headers kernel-modules kernel-modules-extra

Next we download and install Virtualbox.
32 bits:
[root@new-host-5 asleqia]# dnf -y install http://download.virtualbox.org/virtualbox/5.0.10/VirtualBox-5.0-5.0.10_104061_fedora22-1.i686.rpm
64 bits:
[root@new-host-5 asleqia]# dnf -y install http://download.virtualbox.org/virtualbox/5.0.10/VirtualBox-5.0-5.0.10_104061_fedora22-1.x86_64.rpm

Run the virtualbox setup script (note: every time you update the kernel and its modules you will have to run the script again):
[root@new-host-5 asleqia]# sudo /etc/init.d/vboxdrv setup
Stopping VirtualBox kernel modules [ OK ]
Uninstalling old VirtualBox DKMS kernel modules [ OK ]
Trying to register the VirtualBox kernel modules using DKMS [ OK ]
Starting VirtualBox kernel modules [ OK ]
[root@new-host-5 asleqia]#

Add your user to the vboxusers group:
[root@new-host-5 asleqia]# usermod -a -G vboxusers $USER

Done.</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">All systems go</title>
<id>http://fedoraplanet.org/25/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/25/" />
<author>
<name>Fedora Infrastructure Status</name>
</author>
<content type="html/text">Service 'COPR Build System' now has status: good: Everything seems to be working.</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">A look at the kernel bisection scripts</title>
<id>http://fedoraplanet.org/26/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/26/" />
<author>
<name>Laura Abbott</name>
</author>
<content type="html/text">I've been hacking on the bisection scripts for quite some time now.
Things got stalled for a bit in October/November. I introduced
several bugs which caused me to lose multiple days of testing verification so
I took a break and worked on other things to relieve my frustrations.
They are now at the point where they could use some testing besides my own.
Here's a walk-through of what I have.
F21 is going EOL soon. The current (and final) kernel is
4.1.13-101.fc21. An upgrade to F23 might put you at 4.2.6-300.fc23. Upgrades
between major versions are a common point at which things break. Let's
pretend that something in the kernel broke between those two versions.
Grab a copy of the bisect scripts
$ git clone https://pagure.io/fedbisect.git
$ cd fedbisect
This contains the scripts. In order to bisect, we need copies of the git trees.
The bisect scripts will take care of this. Everything will be stored in a
subdirectory. This allows multiple
bisects to be going on at the same time. Each command will take the target
directory as an argument. Generally the form will be ./fedbisect.sh &lt;command&gt;
&lt;target dir&gt;. For this example, the target name will be broken-things. The
first step is to sync the trees
$ ./fedbisect.sh sync broken-things
&lt;take a break while this syncs, it may take a while&gt;
A directory named broken-things is now present. Inside the directory:
$ ls broken-things/
bisect-step kernel pkg-git step-0
kernel is a clone of the tree from kernel.org, pkg-git is the fedora
repository. bisect-step and step-0 are part of the state for bisection. To
actually start a bisect between the two kernel versions
$ ./fedbisect.sh start broken-things 4.2.6-300 4.1.13-101
Note the order, it's bad tag first followed by good tag.
Behind the scenes, this is setting up the kernel tree to run git bisect. If
you look at the kernel tree you will see exactly that:
$ cd broken-things/kernel
$ git bisect log
# bad: [1c02865136fee1d10d434dc9e3616c8e39905e9b] Linux 4.2.6
# good: [1f2ce4a2e7aea3a2123b17aff62a80553df31e21] Linux 4.1.13
git bisect start 'v4.2.6' 'v4.1.13'
Now you can build
$ ./fedbisect.sh build broken-things
This is another command that will take a long time to run. In order for these
scripts to be better than a regular bisect, the patches from Fedora need to
be applied. Figuring out which set of patches to be applied is tricky as noted
previously and brute force is still the best solution. With the exception of
a few commits in the merge window, most commits will build but if for some
reason no appropriate patches can be found, an RPM will be generated of just
the upstream version. At the end there will be a message such as
Got a build that built! Check in /home/labbott/fedbisect/broken-things/step-0 for rpms
and in that folder there will be RPMs to install (there will also be a number
of logs showing what exactly failed. Those can be ignored).
$ ls broken-things/step-0/*.rpm
broken-things/step-0/kernel-9.9.9-0.x86_64.rpm
broken-things/step-0/kernel-devel-9.9.9-0.x86_64.rpm
broken-things/step-0/kernel-headers-9.9.9-0.x86_64.rpm
The RPMs are generated from a custom kernel.spec. It's mostly the same as
the regular one but lots of stuff has been ripped out (perf, debug options,
cpu power util etc.) and it's just one big package. This was mostly for ease
of generation of the RPM. When generating snapshots, it turned out to be
a pain to figure out which filters to apply, especially if module names
changed. Copying over parts and editing where necessary seemed like an uphill
battle for not much value. The lifespan of these bisection images is going
to be very short so making the trade off for build ease and time (copying
modules takes a loooong time) seemed reasonable. In order
to make sure the kernel will always install the version number is 9.9.9-bisect_step
so each installation step should be increasing.
Once the kernel is installed, tests can be run. When there is a result,
the build can be marked as good
$ ./fedbisect.sh good broken-things
or bad
$ ./fedbisect.sh bad broken-things
or it can be skipped if the build is untestable
$ ./fedbisect.sh skip broken-things
Now you can build again
$ ./fedbisect.sh build broken-things
and repeat marking the build as good or bad until the bisect scripts
indicate that a broken commit is found.
These scripts are still in the testing stage so there may be problems.
I suspect most of them will be in the setup phase. The scripts are
available on pagure. Feedback/bug
reports/pull requests are very welcome. Suggestions for future
extensions are also welcome although I have my own list there as well.</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">There are scheduled downtimes in progress</title>
<id>http://fedoraplanet.org/27/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/27/" />
<author>
<name>Fedora Infrastructure Status</name>
</author>
<content type="html/text">Service 'COPR Build System' now has status: scheduled: Scheduled cloud outage in progress</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">DNS for your Vagrant needs: with Landrush, libvirt and dnsmasq</title>
<id>http://fedoraplanet.org/28/</id>
<updated>2015-12-03T07:45:42Z</updated>
<link href="http://fedoraplanet.org/28/" />
<author>
<name>Josef Strzibny</name>
</author>
<content type="html/text">Have you ever needed a DNS server that would be visible both on your host and your Vagrant guests? Landrush is one of those things that can pretty much save you. Unfortunately it was designed around VirtualBox and Mac OS, so it does not work on Linux out-of-the-box. And it did not work with the libvirt provider at all until recently, when I added the support there. Here is how to make all of that work together on Fedora.
First things first — my libvirt patch is not yet merged, so you will have to build Landrush yourself. Check out my fork of Landrush and build the plugin with rake build; then you can install it with the vagrant plugin install command:
$ bundle
$ bundle exec rake build
$ vagrant plugin install ./pkg/landrush-0.18.0.gem
This expects you to have Bundler and Vagrant installed. If you don’t, check Fedora Developer Portal and learn how to do it.
Now you should be able to run Landrush, and it should work just fine for your guests. To confirm that Landrush is running, run vagrant landrush status. Let’s make it work on the Linux host too! On Mac OS, Landrush adds entries in /etc/resolver; unfortunately, that won’t work on Linux. That’s why I put dnsmasq in the title of this post.
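You can also query the Landrush DNS server directly on its non-standard port before any dnsmasq glue is in place; site.local below is a hypothetical guest hostname, so substitute one of your own:

```shell
# Ask the Landrush server (localhost:10053) directly for a guest's address.
# site.local is a placeholder for one of your Vagrant guest hostnames.
dig -p 10053 @127.0.0.1 site.local +short
```

If this returns an address while plain host site.local fails, the remaining work is purely on the host's resolver side, which is what the rest of this post sets up.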
We can tell dnsmasq to listen on 127.0.0.1 (localhost) and make an entry to redirect requested domain names (such as all ending with .dev or .local for example) to our Landrush DNS server (which runs on localhost too, but on port 10053 instead of standard 53). Let’s do it:
Add the following to /etc/dnsmasq.conf:
listen-address=127.0.0.1
And create the following file to redirect our .local domain traffic to Landrush:
$ cat /etc/dnsmasq.d/vagrant-landrush
address=/.local/127.0.0.1#10053
Now let’s try to start dnsmasq service:
$ sudo systemctl start dnsmasq.service
$ sudo systemctl status dnsmasq.service
● dnsmasq.service - DNS caching server.
Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2015-11-29 10:13:17 CET; 4s ago
Process: 26654 ExecStart=/usr/sbin/dnsmasq -k (code=exited, status=2)
Main PID: 26654 (code=exited, status=2)
Nov 29 10:13:17 strzibny-x1 systemd[1]: Started DNS caching server..
Nov 29 10:13:17 strzibny-x1 systemd[1]: Starting DNS caching server....
Nov 29 10:13:17 strzibny-x1 dnsmasq[26654]: dnsmasq: failed to create listening socket for port 53: Address already in use
Nov 29 10:13:17 strzibny-x1 systemd[1]: dnsmasq.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 29 10:13:17 strzibny-x1 systemd[1]: Unit dnsmasq.service entered failed state.
Nov 29 10:13:17 strzibny-x1 systemd[1]: dnsmasq.service failed.
Oh no. It seems that we have a conflict here. This is because libvirt itself automatically starts dnsmasq for your domains as well.
We can fix it by telling the system version of dnsmasq to bind to specific interfaces. Open /etc/dnsmasq.conf again and list only the interfaces you need (ones that don’t conflict):
listen-address=127.0.0.1
...
# on my system
interface=wlp4s0
bind-interfaces
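To find the interface name to list there (wlp4s0 above is specific to the author’s laptop; yours will likely differ), you can print the interfaces the kernel knows about:

```shell
# List network interface names; pick the one(s) dnsmasq should bind to.
# Requires iproute2 (the ip command), standard on Fedora.
ip -o link show | awk -F': ' '{print $2}'
```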
The service should start just fine afterwards. Let’s see if we can resolve our host:
$ host site.local
Host site.local not found: 3(NXDOMAIN)
We have dnsmasq set up, but it’s not used yet. For the system to use it, we need to edit /etc/resolv.conf and add our new name server:
nameserver 127.0.0.1
...
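Note that on Fedora, NetworkManager typically rewrites /etc/resolv.conf whenever the connection changes, so a hand-added nameserver line may not survive. One way to keep it (a sketch, assuming NetworkManager manages your connection) is to tell NetworkManager to leave the file alone:

```ini
# /etc/NetworkManager/NetworkManager.conf
[main]
# Stop NetworkManager from overwriting /etc/resolv.conf
dns=none
```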
Is this working?
$ host site.local
site.local has address 127.0.0.1
Great! Can we ping it yet? Yes and no. If you went with the .dev domain name, you are fine, but if you went with my changes and set up .local instead, ping won’t see your new settings. This is because of Avahi.
To change the domain for Avahi from .local, edit the avahi-daemon.conf configuration file and restart avahi-daemon:
$ cat /etc/avahi/avahi-daemon.conf
[server]
domain-name=.something_else_than_local
...
$ sudo systemctl restart avahi-daemon
If you don’t really need Avahi, you can also change the following in nsswitch.conf:
$ cat /etc/nsswitch.conf
...
#hosts: files mdns4_minimal [NOTFOUND=return] dns
hosts: files dns
Now you can ping your development hostnames and they should be redirected to your VM by dnsmasq and Landrush.
If you want to check that it works alongside port forwarding, you can tell Vagrant to forward port 8080 from your host to 8000 on your guest and run a simple HTTP server there:
# cat Vagrantfile
...
config.vm.network "forwarded_port", guest: 8000, host: 8080
# Fedora 23 example
config.vm.provision "shell", inline: &lt;&lt;-SHELL
python3 -m http.server &amp;
SHELL
...
Afterwards you can open your browser or use curl:
$ curl http://site.local:8080
&lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"&gt;
&lt;html&gt;
&lt;head&gt;
&lt;meta http-equiv="Content-Type" content="text/html; charset=utf-8"&gt;
&lt;title&gt;Directory listing for /&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
&lt;h1&gt;Directory listing for /&lt;/h1&gt;
&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=".bash_history"&gt;.bash_history&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=".bash_logout"&gt;.bash_logout&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=".bash_profile"&gt;.bash_profile&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=".bashrc"&gt;.bashrc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=".ssh/"&gt;.ssh/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;/body&gt;
&lt;/html&gt;
</content>
</entry>
<entry xml:base="http://fedoraplanet.org/">
<title type="text">FESCo Elections: Interview with Adam Miller (maxamillion)</title>
<id>http://fedoraplanet.org/29/</id>