Robotics, automation, industrial control, microcontrollers, digital electronics
Relay control over a 2.4 GHz link – NRF24L01 modules (Home Automation 4) The purpose of this article is to give examples of wireless communication between two Arduino boards using the transceiver module based on the NRF24L01 chip. The image shows two form factors of the transceiver module, both built around the NRF24L01 chip. This chip uses the 2.4 GHz band and can operate at transmission speeds of […]
Relay control with an IR remote: Home Automation (3) In this example we test a system for switching on lamps and electrical equipment connected to mains voltage, driven by a standard (IR = infrared) remote control. We will command them with chosen keys on the remote, which we will first identify with a simple Arduino program. If you want to read in more detail about remote control with […]
Relay control over a serial interface: Home Automation (2) In this example we will build a system for switching on lamps and electrical equipment that run on mains voltage, and we will control them with characters sent over the serial line through the Serial Monitor of the Arduino IDE. If you want to read in more detail about serial communication, we recommend the article What is […]
Relay modules and Arduino: Home Automation (1) In this article we provide the information needed to control devices that run on mains voltage using a relay module. By the end of this article you should be able to control any electrical device with a microcontroller such as the Arduino. Two-relay module A relay is an electrically operated mechanical switch that can be turned on or […]
Robot fish swims powered by fake “blood” The story begins hundreds of meters up in the air with migratory birds, and ends with a robotic fish swimming in the water below. To prepare for their journeys, the birds fatten up enormously, almost doubling their weight, which turns them into feathered batteries. They burn that energy reserve to power their wings to […]
Scratch SCRATCH is a programming language developed by MIT, designed for children with the goal of teaching them programming concepts at an early age so they can develop their creative skills by inventing their own stories, animations, music, games and more. The most fun part of learning to program with SCRATCH is that you don't need to […]
Saber Más Arduino Programming 2019 Day 1: “Course Presentation” Learn more: Chapters Covered Day 1: Course presentation Day 2: “” Learn more: Chapters Covered Day 2: Day 3: “” Learn more: Chapters Covered Day 3: Day 4: “” Learn more: Chapters Covered Day 4: Day 5: “” Learn more: Chapters Covered Day 5: Day 6: “” Learn more: […]
Final Course Project Proposals 2019 Evaluation Criteria for Arduino Projects Each item is scored 0 or 1, for a maximum of 12 points. A score of 5 is considered a pass. Documentation, Code, Project Complexity, Prior Analysis, Flowchart, Schematic, Materials and Cost, Project Steps and Development, Use of Version Control, Functionality, Tests Performed, Improvements […]
Project Evaluation Criteria Evaluation criteria/parts of the project to be assessed: Project complexity; Quality of the documentation; Quality of the code; Motivation and description of the project; Prior needs analysis; Justification of the choice of board, communication medium, components, sensors, etc.; Justification of the choice of libraries and documentation of their use; Justification of the software used; Diagram […]
Arduino Project Publishing Platforms Websites with Arduino projects of all kinds: Instructables: http://www.instructables.com/ Project Hub: https://create.arduino.cc/projecthub Makers: http://www.arduino.org/makers Hackaday: http://hackaday.com/ Hackster: https://www.hackster.io/ Make: http://makezine.com/ Adafruit Blog: https://blog.adafruit.com/ DIYnot: http://www.diynot.com/ ehow: http://www.ehow.com/ Arduino Project Hub tutorial: https://www.hackster.io/Arduino_Genuino/how-to-submit-content-on-arduino-project-hub-cf2177
Planning and Designing Arduino Projects When we take on a new Arduino project, it is advisable to follow a series of steps to ensure success. Beyond the steps that any project would require, it is very important to plan well before starting to buy components and beginning to program. Analyze the requirements of our project, obtained […]
Spain is the world's second-most active country in research on computational thinking A new study published by researchers at Stockholm University identifies Spain as the second country in the world with the most research on computational thinking. Specifically, it is a literature review that examined in detail the empirical scientific publications dealing with [...]
How the summer course “Computational Thinking and Artificial Intelligence” went (III) In this post we describe how the fifth and final day of the in-person phase of the course “Computational thinking and artificial intelligence: from zero to one hundred in one summer” went. The course was organized by the Ministry of Education and Vocational Training, through INTEF, in collaboration with the Universidad Internacional Menéndez Pelayo. You can read about the [...]
How the summer course “Computational Thinking and Artificial Intelligence” went (II) In this post we describe how the third and fourth days of the course “Computational thinking and artificial intelligence: from zero to one hundred in one summer” went. The course was organized by the Ministry of Education and Vocational Training, through INTEF, in collaboration with the Universidad Internacional Menéndez Pelayo. After the first two days, which we already summarized [...]
How the summer course “Computational Thinking and Artificial Intelligence” went (I) Once again this year, and now for the fifth time, the Programamos team has had the pleasure of directing one of the summer courses organized by the Ministry of Education and Vocational Training, through INTEF, in collaboration with the Universidad Internacional Menéndez Pelayo. Specifically, we took part in the course “Computational thinking and artificial [...]
Ubuntu Blog: Linting ROS 2 Packages with mypy One of the most common complaints from developers moving into large Python codebases is the difficulty in figuring out type information, and the ease with which type mismatch errors can appear at runtime.
Python 3.5 added support for a type annotation system, described in PEP 484. Python 3.6+ expands this with individual variable annotations (PEP 526). While purely decorative and optional, a tool like mypy can use it to perform static type analysis and catch errors, just like compilers and linters for statically typed languages.
There are limitations to mypy, however. It only knows what it’s explicitly told. Functions and classes without annotations are by default not checked, though they can be configured to default to Any or raise mypy errors.
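As a quick illustration of what these annotations buy you (the function and names here are invented for the example, not from the post):

```python
def greet(name: str) -> str:
    """A fully annotated function: mypy checks every caller against these types."""
    return f"Hello, {name}"

# Running mypy on this file would flag a call like greet(42) at analysis time,
# before the code ever runs; an unannotated function would be silently skipped.
print(greet("ROS"))
```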
The ROS 2 build farm is essentially only set up to run colcon test. As a result, any contributor wishing to use mypy currently needs to do so manually and hope that no other changes were made by someone not using annotations, or incorrectly annotating their code. This leads to many packages that are partially annotated, or whose incorrect annotations go unnoticed because mypy falls back to Any.
Seeking a fix that 1) helps us remember to check our contributions and 2) maintains a guarantee that packages that are annotated correctly stay that way, we created a mypy linter for ament that can be integrated with the rest of the package test suite, allowing mypy to be run automatically in the ROS 2 build farm and as part of the CI process. Now we can guarantee type correctness in our Python code, and avoid the dreaded type mismatch errors!
ament_lint in action
The ament_lint metapackage defines many common linters that can integrate into the build/test pipeline for ROS 2. The package ament_mypy within handles mypy integration.
To add it as a test within your test suite, you’ll need to make a few changes to your package:
1. Add ament_mypy as a test dependency in your package.xml
2. Add pytest as a test requirement in setup.py
3. Write a test case that invokes ament_mypy and fails accordingly
4. Add ament_mypy as a testing requirement to CMakeLists.txt, if using CMake

package.xml
For the first, find the section of your package.xml after the name/author/license information, where the dependencies are declared. Alongside the other depend blocks, add an entry
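The entry itself appears to have been stripped during extraction; based on standard ROS 2 packaging conventions, it is presumably the test dependency tag:

```xml
<test_depend>ament_mypy</test_depend>
```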
For setup.py, add the keyword argument if it's not already present.
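The keyword argument in question is presumably the pytest requirement from step two above; a sketch of the relevant setup() fragment:

```python
setup(
    # ... existing name, version, packages arguments ...
    tests_require=['pytest'],
)
```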
Finally, we add a file test/test_mypy.py that contains a call to ament_mypy.main():
from ament_mypy.main import main


def test_mypy():
    rc = main()
    assert rc == 0, 'Found code style errors / warnings'
If ament_mypy.main() returns non-zero, our test will fail and the error messages will display.
For configuring CMake, there are two options: manually list out each individual linter and run them, or use the ament_lint_auto convenience package to run all ament_lint dependencies.
In either case, package.xml needs to be configured as above, with an additional dependency of
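The missing dependency is presumably the CMake wrapper package, declared like the other test dependencies:

```xml
<test_depend>ament_cmake_mypy</test_depend>
```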
To manually add ament_mypy, add the following code to your CMakeLists.txt file:
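The CMake snippet appears to have been lost in extraction; following the pattern used by other ament linters, it presumably resembles:

```cmake
if(BUILD_TESTING)
  find_package(ament_cmake_mypy REQUIRED)
  ament_mypy()
endif()
```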
To use ament_lint_auto, add it as a test dependency to package.xml
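Declared alongside the other test dependencies, presumably as:

```xml
<test_depend>ament_lint_auto</test_depend>
```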
And add the following to CMakeLists.txt, before the ament_package() call
# this must happen before the invocation of ament_package()
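The snippet itself is missing from the extracted text; the usual ament_lint_auto boilerplate looks like this (a sketch):

```cmake
if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  ament_lint_auto_find_test_dependencies()
endif()
```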
(Optional) Configuring mypy
To pass custom configurations to mypy, you can specify a ‘.ini’ configuration file (documented here) in the arguments to main.
Create a config directory under test, and a mypy.ini file within. Fill the file with your custom configuration, e.g.:
# Global options:

[mypy]
python_version = 3.5
warn_return_any = True
warn_unused_configs = True

# Per-module options:

[mypy-mycode.foo.*]
disallow_untyped_defs = True

[mypy-mycode.bar]
warn_return_any = False

[mypy-somelibrary]
ignore_missing_imports = True
In test/test_mypy.py, pass the --config option with the path to your desired file.
from pathlib import Path

from ament_mypy.main import main


def test_mypy():
    config_path = Path(__file__).parent / 'config' / 'mypy.ini'
    rc = main(argv=['--exclude', 'test', '--config', str(config_path.resolve())])
    assert rc == 0, 'Found code style errors / warnings'
When using CMake, you’ll need to pass the CONFIG_FILE arg. In the manual invocation example, that means changing the BUILD_TESTING block as follows (assuming your mypy.ini file is in the same directory as above):
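A sketch of what the modified block presumably looks like (the CONFIG_FILE path assumes the test/config/mypy.ini layout created above):

```cmake
if(BUILD_TESTING)
  find_package(ament_cmake_mypy REQUIRED)
  ament_mypy(CONFIG_FILE "${CMAKE_CURRENT_SOURCE_DIR}/test/config/mypy.ini")
endif()
```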
The additional argument means ament_cmake_mypy cannot be auto invoked by ament_lint_auto. If you're already using ament_lint_auto for other linters, you'll need to exclude ament_cmake_mypy.
To exclude ament_cmake_mypy, set the AMENT_LINT_AUTO_EXCLUDE variable and then manually find and invoke it:
# this must happen before the invocation of ament_package()
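A sketch of the combined block (again assuming the test/config/mypy.ini layout from above; the exclude must be set before the dependencies are found):

```cmake
if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  # prevent ament_lint_auto from invoking ament_cmake_mypy without its config
  set(AMENT_LINT_AUTO_EXCLUDE ament_cmake_mypy)
  ament_lint_auto_find_test_dependencies()
  find_package(ament_cmake_mypy REQUIRED)
  ament_mypy(CONFIG_FILE "${CMAKE_CURRENT_SOURCE_DIR}/test/config/mypy.ini")
endif()
```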
Running the Test
To run the test and get output to the console, run the following in your workspace:
colcon test --event-handlers console_direct+
To test only your package:
colcon test --packages-select <YOUR_PACKAGE> --event-handlers console_direct+
The post Linting ROS 2 Packages with mypy appeared first on Ubuntu Blog.
Ubuntu Podcast from the UK LoCo: S12E19 – Starglider This week we’ve been fixing floors and playing with the new portal HTML element. We round up the Ubuntu community news including the release of 18.04.3 with a new hardware enablement stack, better desktop integration for Livepatch and improvements in accessing the latest Nvidia drivers. We also have our favourite picks from the general tech news.
It’s Season 12 Episode 19 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Stuart Langridge are connected and speaking to your brain.
In this week’s show:
We discuss what we’ve been up to recently:
Mark has been fixing his floor.
Stuart has been playing with the new portal HTML element.
We discuss the community news:
Ubuntu 18.04.3 LTS is released
Desktop integration for Livepatch
KDE has removed a feature from .desktop files which posed a security risk
Improvements to Additional Driver options on the way
Ubuntu 19.10 will have an experimental “ZFS on root” option
We mention some events:
The Linux Application Summit is coming to Barcelona in November: 12th to 15th November – Barcelona, Spain.
Open EdTech Global Festival 2019: 21 to 22 of November 2019 – Barcelona, Spain.
We discuss the news:
Man gets “NULL” license plate with unexpected consequences
Rust developers Facepunch announced the future of the game on Linux
Facial recognition in use around King’s Cross Station
kill -9 Linux Journal
Image taken from Starglider published in 1986 for the Amiga by Rainbird.
That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to firstname.lastname@example.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.
Join us in the Ubuntu Podcast Telegram group.
Julian Andres Klode: APT Patterns If you have ever used aptitude a bit more extensively on the command-line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of this merge request !74.
so, what are patterns?
Patterns allow you to specify complex search queries to select the packages you want to install/show.
For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages.
Or the pattern ?automatic allows you to find all automatically installed packages.
You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.
There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)):
Find all packages x that depend on another package that recommends x.
I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.
reducing pattern syntax
aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:
If ?foo takes arguments (like ?depends did), then ?bar is the argument.
Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)
I find that very confusing.
So, when looking at implementing patterns in APT, I went for a different approach.
I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into an APT::CacheFilter::Matcher, an object that can match against packages.
This is useful, because the syntactic structure of the pattern can be seen, without having to know which patterns have arguments and which do not - basically, for the parser ?foo and ?foo() are the same thing.
That said, the second pass knows whether a pattern accepts arguments or not and insists on you adding them if required and not having them if it does not accept any, to prevent you from confusing yourself.
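To make the two-pass idea concrete, here is a tiny illustrative sketch in Python (this is not APT's actual implementation, which is C++; the function names and the ARITY table are invented, and nesting with commas inside arguments is not handled):

```python
def parse(pattern: str):
    """First pass: parse '?name' or '?name(arg, ...)' into a (name, args) tree,
    without knowing which patterns accept arguments; ?foo and ?foo() look the same."""
    assert pattern.startswith('?')
    if '(' in pattern:
        name, _, rest = pattern.partition('(')
        args = [parse(a.strip()) for a in rest.rstrip(')').split(',')]
        return (name[1:], args)
    return (pattern[1:], [])


# Second pass: attach semantics - insist on arguments where required,
# reject them where none are accepted (an invented subset of the real table).
ARITY = {'and': 'many', 'or': 'many', 'not': 1,
         'automatic': 0, 'obsolete': 0, 'garbage': 0, 'true': 0}


def check(tree):
    name, args = tree
    arity = ARITY[name]
    if arity == 0 and args:
        raise ValueError(f'?{name} does not accept arguments')
    if arity == 1 and len(args) != 1:
        raise ValueError(f'?{name} takes exactly one argument')
    for arg in args:
        check(arg)
    return tree
```

With this split, check(parse('?automatic(?garbage)')) raises an error instead of silently reinterpreting the expression, which is exactly the confusion-preventing behaviour described above.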
aptitude also supports shortcuts. For example, you could write ~c instead of ?config-files, or ~m for ?automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.
So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:
we do not support concatenation instead of ?and.
we automatically parse ( as the argument list, no matter whether ?foo supports arguments or not
apt not understanding invalid patterns
At the moment, APT supports two kinds of patterns: Basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version.
This was done as a starting point for the merge, patterns for versions will come in the next round.
We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit on them.
The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.
These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)
Selects objects where all specified patterns match.

?not(PATTERN)
Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)
Selects objects where at least one of the specified patterns match.

?true
Selects all objects.

These patterns select specific packages.

?architecture(WILDCARD)
Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic
Selects packages that were installed automatically.

?broken
Selects packages that have broken dependencies.

?config-files
Selects packages that are not fully installed, but have solely residual configuration files left.

?essential
Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)
Selects packages with the exact specified name.

?garbage
Selects packages that can be removed automatically.

?installed
Selects packages that are currently installed.

?name(REGEX)
Selects packages where the name matches the given regular expression.

?obsolete
Selects packages that no longer exist in repositories.

?upgradable
Selects packages that can be upgraded (have a newer candidate).

?virtual
Selects all virtual packages; that is, packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.
apt remove ?garbage
Remove all packages that are automatically installed and no longer needed - same as apt autoremove.

apt purge ?config-files
Purge all packages that only have configuration files left.
Some things are not yet where I want them:
?architecture does not support all, native, or same
?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)
Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.
Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.
There also is the fallback to fnmatch(): Currently, if apt cannot find a package with the specified name using the exact name or the regex, it would fall back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fall back to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.
Maybe I should allow using  instead of () so larger patterns become more readable, and/or some support for comments.
There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.
Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.
Ubuntu Blog: 8 Ways Snaps are Different Depending on the audience, the discussion of software packaging elicits very different responses. Users generally don’t care how software is packaged, so long as it works. Developers typically want software packaging as a task to not burden them and just magically happen. Snaps aren’t magic, but aim to achieve both ease of maintenance and transparency in use.
Most software packaging systems differ only a little in file format, tools used in their creation and methods of discovery and delivery. Snaps come with a set of side benefits beyond just delivering bytes in a compressed file to users. In this article, we’ll cover just 8 of the ways in which snaps improve upon existing Linux software packaging.
Easy publishing on your timescales
Getting software in officially blessed Linux distribution archives can be hard. This is especially true where the software archives impose strict adherence to a set of distribution rules. For the leading Linux brands, there can be a lengthy delay between a request for inclusion, submission and the package landing in a stable release.
External repositories can be set up and hosted by software developers. However, these software archives are often difficult for users to discover, and not straightforward to enable, especially for novices. The developer also has the added overhead of maintaining the repository.
On the other hand, snaps are published in a central store, which is easily updated and straightforward to search and install from. Within a single day (and often faster), a developer can go from snapcraft register to claim the name of their application to snapcraft push to upload, and snapcraft release to publish their application.
Developers can publish builds for multiple processor architectures at their own pace without having to wait for distribution maintainers to rebuild, review, sponsor and upload their packages. Developers are in control of the release cadence for their software.
With the best will in the world, most users don't install software updates. Sure, that doesn't mean you, dear reader. We're confident you're on top of apt upgrade, dnf update or pacman -Syyu each day, perhaps numerous times every day. A significant proportion of users do not update their systems regularly, though. Estimates place this anywhere between 40% and 70%. This can be even worse for unattended devices such as remote servers, or Raspberry Pis tucked away running an appliance.
Modern Linux distributions have sought to mitigate this with background tasks to automate critical security updates, or graphical notifications to remind users. However, many users switch these off, or simply ignore the notification, leaving themselves at risk.
Snaps solve this by enabling automatic updates by default on all installations. When the developer publishes a new release of software to the store, they can be confident that users will automatically get those updates soon after. By default, the snapd daemon will check the store for updates multiple times a day.
However, some users do not wish to have their software updated immediately. Perhaps they're giving a presentation and want to use the current version they've prepared for, or maybe their Internet connectivity or allowance is limited. Snaps enable users to control when updates are delivered. Users can postpone them to update outside the working day, overnight, or later in the month.
Users are also able to snap refresh to force all snaps to update, or individually with snap refresh (snapname). Auto refreshing ensures users get the latest security updates, bug fixes and feature improvements, while still retaining control where required.
One package for everyone
It’s commonly known that there are (probably) more Linux distributions than there are species of beetle on Planet Earth. Upon releasing a package in one format, users of all other distros will rise up and demand packages for their specific spin of Linux. For each additional package to be created, published and maintained, there is extra work for the developer(s). There’s a diminishing return on investment for every additional packaging format supported.
With one snap package, a developer can hit a significant proportion of users across more than 40 distributions, saving time on packaging, QA, and release activities. The same snap will work on Arch Linux, Debian, Ubuntu, Fedora and numerous other distributions built upon those bases, such as Manjaro, Linux Mint, elementary OS and CentOS. While not every distribution is covered, a large section of the Linux-using community is catered to with snaps.
Changing channels, tracks and branches
When publishing software in traditional Linux distribution repositories, usually there is only one supported version or release of that software available at a time. While distributions may have separate ‘stable’, ‘testing’ and ‘unstable’ branches, these are typically entire repositories.
As a result, it’s not usually straightforward or even possible to granularly populate the individual release with multiple versions of the same application. Moreover, the overhead of maintaining one package in those standard repositories is enough that uploading multiple versions would be unnecessarily onerous.
Usually a developer will build beta releases of their software for experts, QA testing or enthusiasts to try out before a release candidate is published ahead of a stable release. As Linux distributions don’t easily support multiple releases of the same application in their repository, the developer has to maintain separate packages out of band. Maintaining these repositories of beta, candidate and stable releases is further overhead.
The Snap Store has this built in for all snaps. By default there are four risk levels, called ‘channels‘, named ‘stable’, ‘candidate’, ‘beta’ and ‘edge’. Developers can optionally publish different builds of the same application to those channels. For example, the VLC developers use the ‘stable’ channel for their final releases and the ‘edge’ channel for daily builds, directly from their continuous integration system.
Users may install the stable release, but upon hearing of new features in the upcoming beta may choose to snap refresh (snapname) --channel=beta to test them out. They can later snap refresh (snapname) --channel=stable to revert back to the stable channel. Users can elect to stick to a particular risk level they're happy with on a per-application basis. They don't need to update their entire OS to get the ‘testing’ builds of software, and don't have to opt in en masse for all applications either.
Furthermore, the Snap Store supports tracks, which enable developers to publish multiple supported release streams for their application in the store. By default, there is only one implied track – ‘latest’, but developers may request additional tracks for each supported release. For example, at the time of writing, the node snap contains separate tracks for Node 6, 8, 9, 10, 11 and 12, and the default ‘latest’ track, which contains Node 13 nightly builds.
Branches are useful for developers to push short-lived ‘hidden’ builds of their software. This can often be useful when users report a bug with the software, and the developer wishes to produce a temporary test build specifically for that user, and anyone else affected by the bug. The developer can snapcraft push (snapname) --release=candidate/fix-1234 to push a candidate build to the fix-1234 branch.
Delta uploads and downloads
With most of the traditional Linux packaging systems when an update gets published, all users get the entire package every time. As a result, when a new update to a large package is released, there’s a significant download for every user. This places a load – and cost – on the host of the repository, and time and bandwidth on that of the user.
The Snap Store supports delta updates for both uploads and downloads. The snapcraft tool used for publishing snaps to the Snap Store will determine if it's more efficient to upload a full snap or a delta each time. Similarly the snapd daemon, in conjunction with the Snap Store, will calculate whether it's better to download a delta or the full-size snap. Users do not need to specify; this is automatic.
Snapshot on removal
Traditional Linux packaging systems don’t associate data with applications directly. When a software package is installed, it may create databases, configuration files and application data in various parts of the filesystem. Upon application removal, this data is usually left behind, all over the filesystem. It’s an exercise for the user or system administrator to clean up after software is removed.
Snaps seek to solve this as part of the application confinement. When a snap is installed, it has access to a set of directories in which configuration and application data may be stored. When the snap is removed, the associated data from those directories is also removed. This ensures snaps can be atomically added and removed, leaving the system in a consistent state afterwards.
Starting in snapd 2.37, it's possible to take a snapshot of application data prior to removal. The snap save (snapname) command will create a compressed snapshot of the application data in /var/lib/snapd/snapshots. The list of saved snapshots can be seen with snap saved and can be restored via snap restore (snapshot). Snapshots can be removed with snap forget (snapshot) to reclaim disk space.
In addition, starting in snapd 2.39, an automatic snapshot is taken whenever a snap is removed from the system. These snapshots are kept for 31 days by default. The retention period may be configured as low as 24 hours, or raised to a longer duration. Alternatively, the snapshot feature can be disabled completely.
Existing packaging systems on Linux don't cater well to having multiple versions of the same application installed at once. In some cases developers are well catered for: multiple versions of a small selection of tools, such as gcc-6 and gcc-7, are available in the repositories and can be installed simultaneously. However, this only holds for specific packages; it's not universally true that any packaged application can be installed multiple times with different versions.
Snaps solve this with an experimental parallel install feature. Users can install multiple versions of the same snap side-by-side. Each can be given its own ‘instance key’ – which is a unique name to refer to the install. They can then choose which instance to launch, or indeed launch both. For example, a user may want both the stable and daily builds of VLC installed at once, to allow them to test both upcoming features while still being able to play videos on the stable release when the daily build is unstable.
Traditional software packaging is typically combined with a graphical package manager to make it easier for users to install software. For a long time, many of these graphical tools have languished in design and features, serving as a predominantly technical frontend to the underlying console package management tools.
For some years developers have been publishing their software in external repositories, PPAs, in GitHub releases pages or their own website download page.
The default tools don’t expose applications that aren’t part of the default repositories. While some have had visual refreshes and featured updates, they still don’t enable users to discover brand new software hosted externally. This makes it difficult for developers to get their software in front of modern Linux distribution users.
The Snap Store solves this in multiple ways. The graphical desktop package managers GNOME Software and KDE Discover both feature plugins that can search the Snap Store. Moreover, a web frontend to the Snap Store enables users to browse and search for new applications by category or publisher.
Making it easier to publish software in the Snap Store means that delivering a snap can become part of the standard release process for applications. Once developers publish their snap to the Snap Store, it’s immediately visible to users both in the graphical storefronts and on the web.
Developers can link directly to their Snap Store page as a permanent storefront for their application. The storefront pages show screenshots, videos, descriptions along with currently published versions and details of how to install the application. The store features buttons and cards, which can be embedded in pages and blog posts to promote the snap. Users are able to share these pages with friends and colleagues who may appreciate the application, which will drive other users to these snaps.
Furthermore, the Snap Advocacy team regularly highlight new applications on social media and via blog posts to draw user attention to appealing, up-to-date and useful new software. The team also regularly updates the list of ‘Featured Apps’ presented in both the graphical desktop package managers, and on the front page of the Snap Store web frontend.
Developers are encouraged to ensure their store page looks great with screenshots, videos, a rich application description along with links to support avenues. Application publishers can reach the Snap Advocacy team via the snapcraft forum to request their app is included in a future social media or blog update, or to be considered for inclusion as a featured entry in the Snap Store.
In this article I picked eight of the features that set snapcraft, snap and the Snap Store apart from other traditional and contemporary packaging systems. For many people a lot of the technical details of software packaging and delivery are of little interest. What most people care about is getting fresh software with security updates, in a timely fashion. That’s exactly what snaps aim to do. The rest is icing on the cake.
As always we welcome comments and suggestions on our friendly forum. Until next time.
Photo by Samuel Zeller on Unsplash
The post 8 Ways Snaps are Different appeared first on Ubuntu Blog.
Stephen Michael Kellat: Splash Two Well, I just finished up closing out the remaining account that I had on Tumblr. I hadn't touched it for a while. The property just got sold again and is being treated like nuclear waste. I did export my data and somehow had a two gigabyte export. I didn't realize I used it that much.

My profile on Instagram was nuked as well. As things keep sprouting the suffix of "--by Facebook" I can merrily shut down those profiles and accounts. That misbehaving batch of algorithms mischaracterizes me 85% of the time and I get tired of dealing with such messes. The accretions of outright nonsensical weirdness in Facebook's "Ad Interests" for me get frankly quite disturbing.

Remember, you should take the time to close out logins and accounts you don't use. Zombie accounts help nobody.
Education International is the federation of organizations representing more than 30 million teachers and other education workers, through more than 400 member organizations in more than 170 countries and territories.