SAP creates contingent employees market as COVID-19 response

SAP Fieldglass has created a marketplace for finding contingent employees. It was created in response to COVID-19, and will operate through the end of the year.

SAP Fieldglass has launched External Talent Marketplace, designed to connect businesses with staffing agencies. Some of the largest staffing firms, including ManpowerGroup Global, Randstad USA and Allegis Group, are part of it.
The talent marketplace is based on the idea that businesses will move cautiously as the economy recovers, hiring contingent employees instead of full-time employees. Fieldglass makes a platform to manage a contingent workforce.
External Talent Marketplace, which is free for both users of the platform and staffing firms, was created in response to the coronavirus.
The marketplace has a time limit. SAP’s plan now is to close it at year’s end, unless there is a need to continue it.
But so far, there is no massive shift to contingent hiring. ManpowerGroup, one of the world’s largest staffing firms, reported on Monday second-quarter revenues of $3.7 billion, a 30% decline from the prior year period.
Demand for contingent employees is picking up, but it may have been dampened by government-supported paid furloughs for employees of businesses, according to Jonas Prising, chairman and CEO of ManpowerGroup, in a call with investors.
That will change, he said. 
“As the healthcare crisis morphs into an economic crisis,” the demand for staffing services will be similar to what happens in a cyclical downturn, Prising said. The furlough programs will end, and “we expect to see the increase in demand that we typically would see when you come out of a recession,” he said.
Prising added that he believes the pandemic will accelerate the movement toward a more skilled workforce. 
The pandemic is prompting firms to rely on more technology, such as in remote work support, he said. “You very quickly have to shift how you do business, which also requires a different skill set,” Prising explained, adding that ManpowerGroup has increased its focus on technology services.

Cost-cutting motivations

While businesses have reduced contractor budgets in response to the pandemic, Gartner estimates that 32% of organizations globally are replacing full-time employees with contingent workers to reduce costs, according to a recent survey of 4,500 managers worldwide. Gartner doesn’t have a comparative pre-coronavirus percentage that looks specifically at the use of contingent employees for cost reduction.
Emily Rose McRae, a director in the Gartner HR practice, said some firms that were initially planning to hire full-time workers before the coronavirus pandemic are shifting to contingent and breaking work down into projects. “Not only is the time to hire a lot quicker, but they also tend to be able to get started really fast,” she said.
McRae added that while she believes this pandemic-induced shift to contingent workers will be temporary, it may not be for all organizations. Some businesses may make their increased use of contingent workers permanent.
In the SAP Fieldglass market, a business can create a job posting, which will get automatically routed to the appropriate staffing firms or they can directly search the contingent worker pool uploaded by the vendors.
There is a template on the marketplace that businesses can use to help make the request more structured and precise, but customers don’t have to use it, said Arun Srinivasan, general manager of SAP Fieldglass. The platform will enable hiring managers “to come in and search or engage a vetted pool of candidates in a very simple way,” he said.

from Tumblr

Russia Report reveals long-running cyber warfare campaign against UK

Russia has been hacking the UK for years, and the British government has also known about it for years, according to the Intelligence and Security Committee’s report.

Russia is a malicious and “highly capable” threat actor that employs organised cyber criminal gangs to supplement its own skills and carries out malicious cyber activity on a global scale to assert itself aggressively, and interfere in the affairs of other countries.

It poses an immediate threat to the national security of the UK, and the intelligence community is failing to properly coordinate its response.
This is the judgment of the Intelligence and Security Committee (ISC), which, under new leadership, published the long-awaited Russia report on 21 July 2020. Despite having been ready for publication for months, the report had been repeatedly suppressed by Boris Johnson’s Conservative government.
The report reveals how Russia has conducted malicious cyber activity to influence democratic elections and undertake pre-positioning activity on critical national infrastructure (CNI) – in the course of giving evidence, the National Cyber Security Centre (NCSC) revealed there was Russian cyber intrusion into the UK’s CNI, although which sectors have been targeted is redacted.
It shows how Russian GRU intelligence agents conducted orchestrated phishing attacks on the UK government, in particular against the Foreign and Commonwealth Office (FCO) and the Defence Science and Technology Laboratory (Dstl), during the investigation into the Salisbury chemical attack.
The report also sheds light on how Russia has employed organised cyber criminal gangs, which MI6 has assessed “comes to the very muddy nexus between business and corruption and state power in Russia”. In the course of giving evidence, GCHQ told the committee there was a “considerable balance” of intelligence that shows links between serious and organised crime and Russian state activity, and described this as something of a symbiotic relationship.
Moreover, the report confirms that the UK government has known about the extent of Russian cyber activity in the UK for years, but has been too reluctant to point the finger at Moscow.
“Russia’s promotion of disinformation and attempts at political influence overseas – whether through the use of social media, hack and leak operations, or its state-owned traditional media – have been widely reported… The UK is clearly a target and must equip itself to counter such efforts,” said the committee in a lengthy press statement.
However, said the committee, the inquiry found it hard to establish who was responsible for defending the UK’s democratic processes against cyber attacks, branding it “something of a hot potato”. While it conceded there was naturally nervousness around any suggestion that the intelligence services might be inclined to get involved in the nitty-gritty of the democratic process, this did not apply when it came to protecting such processes. It questioned in particular whether DCMS and the Electoral Commission were really up to the job of tackling a major hostile state threat.
“Democracy is intrinsic to our country’s success and well-being. Protecting it must be a ministerial priority, with the Office for Security and Counter-Terrorism taking the policy lead and the operational role sitting with MI5,” said the committee.
The committee also blasted digital and social media platforms for failing to step up and take some responsibility. “The government must establish a protocol with these companies to ensure that they take covert hostile state use of their platforms seriously, with agreed deadlines within which such material will be removed, and government should ‘name and shame’ those which fail to act,” it said.
“We do however welcome the government’s increasingly assertive approach when it comes to identifying, and laying blame on, the perpetrators of cyber attacks, and the UK should encourage other countries to adopt a similar approach to ‘naming and shaming’.
“The same is true in relation to an international doctrine on the use of offensive cyber: this is now essential and the UK – as a leading proponent of the rules-based international order – should be promoting and shaping rules of engagement, working with our allies,” it added.
Ray Walsh, digital privacy advocate at ProPrivacy, said: “The Russia report finally published today by the UK government confirms what cyber security experts have been calling attention to for many years – that the Russian government and its state-employed hackers are engaging in active cyber warfare against the West, which includes phishing attempts against government agencies, the deployment of covert exploits designed to steal top-secret information, and activities designed to influence the democratic elections of other nations.
“Perhaps most damningly for the UK government is that the report reveals that the UK has been aware of Russia’s ongoing cyber warfare for around four whole years. Back in 2016, the committee recommended that the UK government should leverage its diplomatic relationships to openly begin assigning blame to Russian cyber attacks and to gain support from the international community in finding ways to retaliate against or prevent those malicious practices.”
Walsh said the acknowledgement that Russia had been attempting to influence elections and the action of the UK government in suppressing the report for nine months may well cause people to question the legitimacy of the results of UK elections held in the past few years, including the Brexit referendum of June 2016.
“Cyber security firms have been detailing the nefarious activities and attack vectors of Russian state-sponsored hackers such as Fancy Bear, APT28, Pawn Storm, Sofacy, Sednit, Tsar Team, and Strontium for many years, but this is the first time that the UK government has formally acknowledged that those malicious state-sponsored actors have been directing their efforts directly at UK elections and government agencies,” said Walsh.
“Now that the UK has attributed blame, it will be interesting to see how exactly the government proceeds and what it can do to prevent those activities and produce actual changes in light of the findings,” he said.

Read more about cyber warfare

  • The future of cyber warfare places enterprise security and survivability in the crosshairs. Learn more about cyber warfare threats and capabilities and how infosec can prepare.
  • A retired US Navy cryptologist implores enterprises to build key cyber warfare laws into their infosec strategy to improve survivability on the digital battleground in his new book.
  • On a cold afternoon in Finland, F-Secure’s Mikko Hypponen discusses cyber weapons and nation state threats, and explains why arms limitations treaties might one day expand to include malware and other threats.


10 Awesome JavaScript Libraries You Should Try Out in 2020

JavaScript is one of the most popular languages on the web. Even though it was initially developed just for web pages, it has seen exponential growth in the past two decades.
Now, JavaScript is capable of doing almost anything and works on several platforms and devices including IoT. And with the recent SpaceX Dragon launch, JavaScript is even in space.
One of the reasons for its popularity is the availability of a large number of frameworks and libraries. They make development much easier compared to traditional Vanilla JS development.
There are libraries for almost anything, and more are coming out almost every day. But with so many libraries to choose from, it becomes difficult to keep track of each one and how it might be tailored to your needs.
In this article, we will discuss 10 of the most popular JS libraries which you can use to build your next project.


Leaflet

I think Leaflet is the best open source library for adding mobile-friendly interactive maps to your application.
Its small size (39kB) makes it a great alternative to consider over other map libraries. With cross-platform efficiency and a well-documented API, it has everything you need to make you fall in love.
Here is some sample code that creates a Leaflet map:
var map = new L.Map("map", {
    center: new L.LatLng(40.7401, -73.9891),
    zoom: 12,
    layers: new L.TileLayer("{z}/{x}/{y}.png")
});
In Leaflet, we need to provide a tile layer since there isn’t one by default. But that also means you can choose from a wide range of layers, both free and premium. You can explore various free tile layers here.
Read the Docs or follow the Tutorials to learn more.


fullPage.js

This open-source library helps you create full-screen scrolling websites. It’s easy to use and has many options to customize, so it’s no surprise it is used by thousands of developers and has over 30k stars on GitHub.
You can even use it with popular frameworks such as React, Vue, and Angular.
I came across this library about a year ago and since then it has become one of my favorites. This is one of the few libraries that you can use in almost every project. If you haven’t already started using it then just try it, you will not be disappointed.



Anime.js

One of the best animation libraries out there, Anime.js is flexible and simple to use. It is the perfect tool to help you add some really cool animation to your project.
Anime.js works well with CSS properties, SVG, DOM attributes, and JavaScript Objects and can be easily integrated into your applications.
As a developer it’s important to have a good portfolio. The first impression people have of your portfolio helps decide whether they will hire you or not. And what better tool than this library to bring your portfolio to life? It will not only enhance your website but will also help showcase your actual skills.
Check out this Codepen to learn more:
You can also take a look at all the other cool projects on Codepen or Read the Docs here.


screenfull.js

I came across this library while searching for a way to implement a full-screen feature in my project.
If you also want a full-screen feature, I would recommend using this library instead of the Fullscreen API directly, because of its cross-browser consistency (it is built on top of that API).
It is so small that you won’t even notice it – just about 0.7kB gzipped.
Try the Demo or read the Docs to learn more.


Moment.js

Working with date and time can be a huge pain, especially with API calls, different Time Zones, local languages, and so on. Moment.js can help you solve all those issues whether it is manipulating, validating, parsing, or formatting dates or time.
There are so many cool methods that are really useful for your projects. For example, I used the .fromNow() method in one of my blog projects to show the time the article was published.
const moment = require('moment');

// note: months are zero-indexed, so [2019, 7, 13] is 13 August 2019
const relativeTimeOfPost = moment([2019, 7, 13]).fromNow();
// a year ago

Although I don’t use it very often, I am a fan of its support for internationalization. For example, we can customize the above result using the .locale() method.
// French
const relativeTimeOfPostInFrench = moment([2019, 7, 13]).locale('fr').fromNow();
// il y a un an

// Spanish
const relativeTimeOfPostInSpanish = moment([2019, 7, 13]).locale('es').fromNow();
// hace un año
Moment.js Homepage
Read the Docs here.


Hammer.js

Hammer.js is a lightweight JavaScript library that lets you add multi-touch gestures to your Web Apps.
I would recommend this library to add some fun to your components, for example by making a div respond when you tap, swipe, or drag it.
It can recognize gestures made by touch, mouse and pointerEvents. For jQuery users I would recommend using the jQuery plugin.
$(element).hammer(options).bind("pan", myPanHandler);
Read the Docs here.


Masonry

Masonry is a JavaScript grid layout library. It is super awesome and I use it for many of my projects. It can take your simple grid elements and place them based on the available vertical space, sort of like how contractors fit stones or blocks into a wall.
You can use this library to show your projects in a different light. Use it with cards, images, modals, and so on.
Here is a simple example to show you the magic in action. Well, not magic exactly, but watch how the layout changes when you zoom in on the web page.
And here is the code for the above:
var elem = document.querySelector('.grid');
var msnry = new Masonry( elem, {
  itemSelector: '.grid-item',
  columnWidth: 400
});

// or initialize with just a selector string
var msnry = new Masonry( '.grid');
Check out these Projects


D3.js

If you are a data-obsessed developer then this library is for you. I have yet to find a library that manipulates data as efficiently and beautifully as D3. With over 92k stars on GitHub, D3 is the favorite data visualization library of many developers.
I recently used D3 to visualize COVID-19 data with React and the Johns Hopkins CSSE data repository on GitHub. It was a really interesting project, and if you are thinking of doing something similar, I would suggest giving D3.js a try.
Read more about it here.


slick

Slick is fully responsive, swipe-enabled, infinite looping, and more. As mentioned on the homepage, it truly is the last carousel you’ll ever need.
I have been using this library for quite a while, and it has saved me so much time. With just a few lines of code, you can add so many features to your carousel.
// slick is a jQuery plugin; call it on the element that wraps your slides
$('.carousel').slick({
  slidesToShow: 3,
  slidesToScroll: 1,
  autoplay: true,
  autoplaySpeed: 2000
});
Check out the demos here.


Popper.js

Popper.js is a lightweight ~3 kB JavaScript library with zero dependencies that provides a reliable and extensible positioning engine you can use to ensure all your popper elements are positioned in the right place.
It may not seem important to spend time configuring popper elements, but these little things are what make you stand out as a developer. And at such a small size, it doesn’t take up much space.
Read the Docs here.


As a developer, having and using the right JavaScript libraries is important. It will make you more productive and will make development much easier and faster. In the end, it is up to you which library you prefer based on your needs.
These are 10 JavaScript libraries that you can try and start using in your projects today. What other cool JavaScript libraries do you use? Would you like another article like this? Tweet and let me know.


An Introduction to Linux

From smartphones to cars, supercomputers and home appliances, home desktops to enterprise servers, the Linux operating system is everywhere.
That opening sentence can sound intimidating or challenging, but don’t let it make you afraid of learning more about this amazing operating system.

What is Linux

Linux is an operating system like Windows and macOS; even Android is powered by a Linux-based kernel.
The OS is responsible for managing the hardware and software on a computer. It is composed of many pieces of software, which I’ll try to explain here.
It all starts at the Bootloader…


Bootloader

The Bootloader layer is responsible for managing the boot process of your computer: the process of turning your computer on and loading the peripheral drivers. To us, it appears only as a simple splash screen blinking with an image in the corner.
It starts working after the computer’s startup, or boot process, begins with the BIOS (Basic Input/Output System) software on the motherboard. Once the hardware initialization and checks are done, the BIOS starts up the bootloader.
Just to make you familiar with some new names, the most famous bootloader is called GRUB.
Then it goes to the OS kernel, but first, let me make some comparisons here:
Think of an OS as an engine: it is the part of a machine that makes everything run properly. If the engine is not working, the machine works poorly or not at all.
The OS can also be compared to our brain: the device’s hardware is our limbs, but what makes our hands move is our brain. OK, not exactly our brain but our nervous system. To extend the comparison, our nervous system maps to the OS kernel.


Kernel

The Kernel is the part responsible for dealing with every hardware component and handling the communication between hardware and software.
The Kernel is responsible for memory, process, and file management.
When dealing with I/O (input/output) devices, the Kernel needs to understand whether you are using a wired or wireless network connection, or whether you are using a USB mouse or a touchpad device.
And how does the Kernel deal with RAM (Random-Access Memory)?
RAM is used to store program instructions and data, such as variables. Often multiple programs will try to access memory at once, frequently demanding more memory than the device has available. The Kernel is responsible for deciding which pieces of memory each process can use, and for choosing what to do when no memory is left.

Init System

The Init System layer takes over the job of finishing the computer’s startup once the hand-over from the Bootloader occurs.
A curious fact for those who are familiar with Linux: if you run a command at your terminal to see all running processes, such as ps aux, you’ll notice that the first process running is the init system, with a PID (process ID) of 1.


Daemons

The Daemons are background services; they can start right after the boot process, or when you log in on your laptop. It’s in this layer that you choose which applications you want loaded with your computer. You can simply run a command to enable or disable a daemon at startup.
They manage many parts of the system, handling things like inserting or removing a device, managing user login, or managing your internet connection with a system window where you can connect to a Wi-Fi router by filling in the password, for example.

Graphical Server

But this example of a pop-up window asking to connect to my home’s Wi-Fi network wouldn’t be possible without the Graphical Server layer.
The Graphical Server is known as X Server or simply X.
This layer is responsible for drawing and moving windows on our device and for interacting with our mouse and keyboard.

Desktop Environment

Closely attached to the Graphical Server layer comes the Desktop Environment layer, which you will be interacting with directly. This is where you choose the system UI from options like XFCE, KDE, GNOME, Cinnamon, and others. Every Desktop Environment has its own built-in applications, like a default browser, file manager, tools, and UI.


Applications

Finally, the last layer: the Applications layer!
This layer is easy to understand; it is where our applications are! But it holds more than just apps like Google Chrome or your favorite code editor. All of our development tools are here too, things like git, curl, and bash, and any of your programming language interpreters or compilers.
Nowadays many Linux distros ship with a Software Center where you can simply open the application, search for a term, and install an application with a single click on the download button. Linux has become easier and friendlier for new users over time.

Why use Linux

Why bother learning to use a new OS from scratch when we have friendlier OSes like Windows and macOS? I’ll try to convince you by talking about some experiences that I had in my career.
To start, Linux is free: you don’t have to pay for it. And you don’t have to put your privacy and security aside by using a cracked Windows version. Now imagine that you have a company and each of your employees needs a paid version of Windows. How much would that take out of the company account?
The thing that always made me like Linux most…
Linux is like ice cream! You can choose whatever flavor you like most. There is a flavor for every taste.
You can choose the most famous Linux distro in the world, Ubuntu. If you like a friendlier and more beautiful UI, you can choose distros like Deepin or elementary OS. If you just want a stable system, you will probably choose Debian. You may want to feel like a hacker using Kali Linux, but don’t fool yourself: a distro does not make you a hacker. Or you may want to try something new and give MX Linux a chance, which has been among the most downloaded Linux distros lately. You can check the list on DistroWatch.
My favorite flavor has always been Mint; Linux Mint caught my attention and became my favorite Linux distribution.
The second point: are you really satisfied with your operating system? Can you trust it without any paid anti-virus software? Is it slowing down the longer you use it? Is your OS crashing for no reason?
Since I started using Linux as my preferred OS, I have never had a headache with these kinds of problems. Linux has evolved to be one of the most secure OSes on the planet, and you don’t have to pay to use it.
And when you want to become a programmer, it makes your life easier. All the extensions and libs you need have already been compiled into an apt package, and you just need to run a simple command to get your application running, without having to create some workaround.

Advice for beginners

I see many of my friends fearing Linux, as if it would set their computer on fire or destroy their entire network. Calm down, Linux is not your enemy. If you are not confident adopting any of the thousands of Linux distributions as your primary operating system, you can try one in a virtual machine. Any change in this VM will be scoped to its hard drive, which is a single file on your computer. That means you don’t have to be afraid to do whatever you want with it. Try to have fun: customize the panels, run some commands directly from the terminal, like mkdir, ls, and cd. Just try to enjoy the ride.

Last thoughts

Linux will always be a good OS choice when thinking of security, productivity, and learning more about programming.
My idea here was to share some personal insights. I have to admit that I was trying to learn more about Linux in detail, and writing this down made everything here much clearer in my head. I hope it is clearer to you too.


Covid-19 outbreak ramps up data focus at LOTI

The boroughs working in local government coalition LOTI are focusing on joining up and acting on data-driven insights to help communities through the crisis and beyond

The London Office of Technology and Innovation (LOTI) has been ramping up focus on projects relating to data in its first year of activity as local authorities seek ways to use technology to respond to the challenges communities are facing in the Covid-19 crisis.
Under the leadership of former Nesta director Eddie Copeland, LOTI has been working with a multi-disciplinary team from its membership of 16 London boroughs, the Greater London Authority and London Councils on projects relating to digital, data and innovation.
The overarching theme is that many of the issues local authorities face, such as the need to develop a skilled workforce that can operate in the digital economy, are best handled through a collaborative approach.
In its first annual report, LOTI management noted that it initially thought the emergence of the new coronavirus outbreak would mean a total shift in its strategy, but the pandemic has shown the organisation’s “objectives and ways of working are more important than ever”, in particular when it comes to data.
“Tackling barriers to using data is vital: without good data we can’t see what’s going on and who needs help. And right since day one, we’ve said that all our work must focus on achieving real-world outcomes that matter to Londoners,” LOTI’s programme manager Onyeka Onyekwelu said in the report’s foreword.
“The outcomes that matter now – supporting residents whose lives have been disrupted and whose needs have grown more acute – could hardly be more serious,” she added.
In its first year of operation, LOTI boroughs worked on initiatives aimed at addressing the barriers that prevent local authorities from joining up, analysing and acting on their collective data to deliver better public services.
LOTI had become used to hosting several in-person events and workshops and had to digitise all of its activities, pivot much of its work planned for the year, and find ways to support councils in their response to the Covid-19 crisis. According to the report, the focus of the projects is on tackling vulnerability and promoting inclusion of citizens in need.
Many of the initiatives relating to the coronavirus-focused work have a significant data component, according to the report. One example is the creation of a LOTI data analysts network for boroughs to exchange tips on Covid challenges, particularly the use of data to identify residents in need.
LOTI also helped boroughs share data with each other on children receiving free school meals, to ensure ongoing support for vulnerable young people. It also supported boroughs’ applications for more timely access to death registration data.
According to LOTI, most of the focus of work in its second year will be related to understanding the changing nature of residents’ needs and designing better ways to address these requirements.

Fixing the data plumbing

Much of the data-related work prior to the pandemic focused on “fixing the plumbing” so that local authorities can make such initiatives more sophisticated.
An example is a project around information governance led by Camden, where workshops found that barriers involve the late discovery of data protection issues in pan-London data projects, as well as lack of standards in terms of information governance processes and version control issues that lead to multiple rounds of feedback and bureaucracy.
The project led to the launch of a seven-step standardised process for information governance that all boroughs can follow, as well as online tools to streamline processes around information sharing agreements and data privacy impact assessments, which is being co-created with the Greater Manchester Combined Authority, Norfolk County Council, Leeds City Council and CC2i.
Other data-related initiatives LOTI was involved with include work with Brent Council on a series of workshops focused on data and artificial intelligence ethics, as well as support to GLA’s discovery phase of the London Datastore, focusing on the potential of the platform as a technical means for local authorities to exchange and analyse data.
A LOTI project led by the London Borough of Waltham Forest looked at common challenges councils face when dealing with tech suppliers, and originated a series of suggestions for terms and conditions local authorities would like to see in contracts. One of them, according to the report, is that London boroughs would like every future tech tender and contract to grant them full and free access to their system data, preferably via an application programming interface (API).
City Tools, a platform with the aim of helping local authorities get better value from the technologies they use, as well as initiatives focused on using data to improve services, is another first year highlight for LOTI. Further work around City Tools included improvements made in the latter part of 2019 towards the creation of Thirty3, a platform that will show boroughs’ technology tender opportunities.

from Tumblr

The future of VPNs in a post-pandemic world

The future of VPNs in a post-pandemic world:

Pre-pandemic, many experts predicted VPNs’ demise. During the pandemic, VPNs became lifelines for remote workers to do their jobs. Here’s what the future may hold for VPNs.

The importance of VPNs changed significantly in early 2020, as the coronavirus pandemic caused massive digital transformation for many businesses and office workers. VPN trends that started prior to the pandemic were accelerated within days.
What does the future of VPNs look like from the middle of the pandemic?

The past and future of VPN connectivity

The migration of office workers to a work-from-home environment created a new dilemma: How should organizations support workers who may use computers and mobile devices from home to access corporate resources?
The traditional VPN uses a fat client model to build a secure tunnel from the client device to the corporate network. All network communications use this tunnel. However, this model comes at a cost: Access to public cloud resources must transit the VPN tunnel to the corporate site, which then forwards access back out to the internet-based cloud provider. This is known as hairpinning.
For the future of VPNs, end systems’ increasing power will facilitate the migration of more software-based VPN technology into endpoints. VPN technologies will evolve to take advantage of local process capabilities, which make VPNs easier for users and network administrators alike. Network admins will control VPN administration through central systems.
Some predictions for the future of VPNs suggest hardware isn’t necessary in a software world. Yet, as something must make the physical connections, hardware will still be necessary. More likely, x86 compute systems that perform functions previously done in hardware will replace some dedicated hardware devices – particularly at the network edge, where distributed computational resources are readily available. The network core will continue to require speeds only dedicated hardware can provide for the foreseeable future.
[Figure] How a VPN works: VPNs enable authorized remote users to securely connect to their organization’s network.
VPNs may also begin to function like software-defined WAN products, where connectivity is independent of the underlying physical network – wired, wireless or cellular – and its addressing. These VPN systems should use multiple paths and transparently switch between them.

The past and future of VPN security

Corporate VPNs provide the following two major functions:
  1. encrypt data streams and secure communications; and
  2. protect the endpoint from unauthorized access as if it were within the corporate boundary.
The most straightforward use of encryption technology is to secure communications. Encryption technology is relatively mature and is built into modern browsers, which makes it easy to use. Secure Sockets Layer or Transport Layer Security VPNs can provide this functionality.
Modern VPN systems protect endpoints from unauthorized access, as these systems require all network communications to flow over a VPN between endpoints and a corporate VPN concentrator. Other corporate resources, like firewalls, intrusion detection systems and intrusion prevention systems, protect endpoints with content filtering, malware detection and safeguards from known bad actors.
In the future, IT professionals should expect to see more examples of AI and machine learning applied to these security functions to increase their effectiveness without corresponding increases in network or security administrator support.
SDP vs. VPN vs. zero trust
Innovative new technologies, such as software-defined perimeter and zero-trust models, will greatly influence the future of VPNs.
VPN paths become less efficient when an endpoint communicates with internet-based resources, like SaaS systems. The endpoint must first send data to the VPN concentrator, which then forwards the data to the cloud-based SaaS application and, therefore, adds to network latency. In addition, network overhead increases within the VPN because the SaaS application also employs its own encryption.
Split tunneling is a potential solution to this inefficiency, but IT teams must select VPN termination points carefully to avoid a security hole. Integration with smart DNS servers, like Cisco Umbrella, enables split tunneling to specific sites under the control of network or security administrators.
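As an illustration (a hypothetical sketch, not drawn from any product mentioned here), split tunneling can be expressed in an OpenVPN server configuration by pushing routes only for corporate subnets instead of a default route; the subnets and DNS address below are made up:

```conf
# Split tunneling: route only corporate subnets through the VPN.
# Pushing "redirect-gateway def1" instead would send ALL client
# traffic through the concentrator (a full tunnel).
push "route 10.10.0.0 255.255.0.0"
push "route 192.168.50.0 255.255.255.0"

# Hand out an internal DNS server so corporate names still resolve
push "dhcp-option DNS 10.10.0.53"
```

SaaS and other internet destinations then leave directly via the client’s local connection, avoiding the hairpin through the corporate site.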
An even better security stance relies on a zero-trust model, which assumes endpoints are compromised, regardless of their location. Forrester Research introduced zero trust in 2010, and it has become the new standard to which networks should conform. Zero-trust security components include allowlisting and microsegmentation. The future of VPNs includes automated methods to create and maintain these security functions.
IT professionals can expect the future of VPN technology to provide an increase in security while reducing the effort needed to implement and maintain that security.


How businesses are Incorporating Innovative Technologies to Market in Pandemic

How businesses are Incorporating Innovative Technologies to Market in Pandemic:

In the past few months, the adoption of innovative technologies has accelerated under the shadow of the coronavirus pandemic.
Social distancing, remote working, and self-isolation have created hurdles for marketers trying to reach their customers. This has forced marketers to incorporate digital channels, including innovative technologies such as AI and ML, AR and VR, big data, and digital marketing, into their marketing strategy. The pandemic has also created many opportunities for businesses to expand their digital footprints, as people spend most of their time online. Expert opinions and predictions indicate that these trends will continue through the crisis and may well outlast it. In this blog, we discuss some of the major innovative technologies being used increasingly during the coronavirus pandemic.

1. Shifting to online ordering

Online delivery services are emerging as a new norm: people prefer to order food, groceries, and other products online instead of visiting a physical store. As a result, restaurants and other brick-and-mortar stores are using the potential of web technology to cater to their customers’ needs by providing digitally enabled delivery. For the first time, delivery orders exceed supply, and delivery apps like DoorDash, Instacart, and Uber Eats are being used extensively. Uber Eats saw a 30% increase in customers signing up for its services during COVID-19. On Instacart, people ordered around $680 million worth of goods each week of April, about a 430% increase from December. According to Statista, 27% of US respondents stated that they deliberately purchased hygiene products (e.g. hand sanitizer, toilet paper) online instead of offline because of the coronavirus pandemic. Businesses are rapidly adopting digital delivery systems to make it convenient for customers to shop online.

2. Use of AR and VR to market products

Businesses are utilizing the potential of augmented reality (AR) and virtual reality (VR) more during the pandemic. Retail, real estate, and trade shows/events are the major areas using AR and VR technology to provide a better virtual marketing experience for customers and drive sales. In real estate, the VR startup Beike has developed a VR platform that lets potential customers take a virtual 3D tour of properties on the market. The platform has 4 million properties and apartments listed and 650 million users. VR-based trade shows are widely used by hotels, airlines, and the fashion industry; the technology enables users to join large conferences virtually and have real-world trade show experiences without breaking lockdown orders. It eliminates trade show costs and time, such as travel time, meal costs, and other expenses associated with traditional trade shows. Other marketing areas are using AR and VR technology to keep their marketing running as it was before the crisis.

3. Digital marketing

With consumers staying safely in their homes and spending most of their time online, digital marketing platforms are the best tools for brands to reach their potential customers during the pandemic. Digital marketing channels such as e-mail automation, SEO, content marketing, and social media marketing were already well established, but due to the pandemic, businesses are relying on digital channels for marketing more than ever. With increased activity on the biggest platforms, like Google and Facebook, the opportunities for businesses to reap the benefits have grown.

4. Big-Data, predictive analytics

Predictive analytics and forecasting are being used widely during the COVID-19 pandemic. Big data helps businesses make decisions, and many companies are implementing data science solutions for more efficient marketing. The retail industry is embracing data science and predictive analytics to uncover hidden revenue opportunities, consumer purchase behavior, competitive prices, and more. These insights help businesses make better marketing decisions and get better results. Predictive analytics helps not only in marketing decision-making but also in increasing model sustainability by processing massive amounts of data. Data science solutions and predictive analytics are helping businesses stay ahead of the curve during the COVID-19 pandemic.

5. AI and ML-based Automations

The pandemic has accelerated AI- and ML-based marketing automation processes that were already well underway. With the help of machine learning, it becomes possible to make market predictions by processing large amounts of data. Major companies such as Google and Facebook are using machine learning to improve the ads they offer consumers. Machine learning algorithms help advertisers improve their media-buying efficiency, while AI and automation help marketers in many other areas: segmenting the customer base, selecting the right channel to drive engagement, fraud detection and security, and more. Several companies are embracing the potential of AI and ML to automate their systems and deal with current marketing challenges.


With the increase in consumers’ digital shopping habits, the use of innovative technologies by businesses of all kinds has grown during the pandemic, and executing business processes digitally is shaping up to be the new normal. A recent survey states that most people of all age groups are likely to keep making online purchases even after the crisis. These trends are expected to persist through the pandemic and beyond; thus, it is important for businesses to develop an infrastructure that maximizes digital or web-based interactions with their customers. Many experts suggest that businesses must utilize innovative technologies to stay ahead of the curve.


Is linux good enough for everyday programming?

Is linux good enough for everyday programming?:

Disclaimer: I’m writing about my experience with the major OSes (Windows 10, macOS High Sierra/Sierra, Ubuntu/Manjaro) using a solid-state drive. It has a huge impact in terms of speed, so your experience could differ from mine.
Hello there. To begin with, this post isn’t about what’s the best OS for everyday programming; that could depend on the stack used, miscellaneous programs and especially YOU. So I’ll try to describe all the good/bad things that happened during my everyday workflows.
But before that I should let you know my programming stack so you won’t get confused later. I mainly use:
  1. PHP frameworks and CMS
  2. nodejs frameworks for frontend
  3. react native/ionic for mobile dev
  4. Photoshop (with CssHat) for HTML integration, banners for mobile apps.
  5. ms office due to my current job.[1]

Ubuntu (Unity/Gnome):

By the end of 2015, after a good run with Windows 7 and using Ubuntu only occasionally in virtual machines, I thought I’d give it a shot as a daily driver, so I installed version 15.10. Back then I was programming in PHP, Java and C# (because of my software engineering studies). PHP and Apache performed great locally, same for Java, but I used a Windows 7 VM for Visual Studio, MS Office and Adobe Photoshop, because all the alternatives (Darktable/GIMP, OpenOffice) weren’t at the same level. I tried, but the more you use them the more you notice their weak points, such as ease of use and backward compatibility.
I had a good run (exactly 2 years) switching between the Unity and Gnome DEs (I was the n°1 KDE hater, btw), but over time, even with an SSD, it felt kind of slow (I was always stuck with 16.04 LTS). Honestly, I wasn’t a fan of Ubuntu’s PPAs either, and then I discovered the Hackintosh community.

macOS (10.12/10.14)

So after a hell of an installation process, I managed to run macOS Sierra smoothly on a laptop with hardware close to a late-2012 MacBook Pro (HP EliteBook 840 G1). Apps installed with one simple drag ’n’ drop (this applies to Android Studio too). It ran the Android Virtual Device smoother than Windows 7 and Ubuntu on the same laptop; I was very surprised. The memory management, the app integration and the overall stability were great. By that time I had finished my studies, so no more Java or .NET programming, and the Adobe/MS Office suites were a strong point compared to Linux in general: every program ran natively without the need for any VM, with our beloved Unix CLI.
The only drawback I had with the Mac, or rather with the Hackintosh, was system updates/upgrades: they were so painful because they break your system every time. I backed up the whole bootable system image whenever I attempted to update, because the kexts (kernel extensions, or “drivers”) weren’t always backward compatible.
So in the end I was thinking of going back to Linux, but I wasn’t sure which distribution I would stick with this time. I wanted a stable distro that would let me forget completely about upgrades or “big updates”. In the meantime I gave Windows 10 another shot after hearing it had gotten better and better over the last years.
And again, after 2 years with no workflow complaints I backed up my hackintosh installation and installed the last build of windows 10.

Windows 10.

I’ll summarize my experience in one line: “not great, not terrible”.
Compared, again, to macOS, the system was very smooth in every way: snapping windows, switching virtual desktops, searching programs and files in the start menu, no problem, but! I already missed the Unix CLI. Yeah, I know there’s cmder and other tools. The overall performance was okay, but there was some latency when compiling Node.js apps. My workflow didn’t change. I used Laragon for all my PHP projects with PhpStorm, and it was perfect, honestly. On the other hand, the Android Emulator was terrible even with 8 GB of RAM and an SSD; macOS handled it way better.
In the meantime I played with some linux distros in VMs and made the choice: Manjaro, KDE flavor.


“You said you hated KDE, right?” Well yes, but for a reason. First, I didn’t want to bring back the Gnome memories I had with Ubuntu, and second, what I disliked was its UI similarity to Windows in general, 10 especially. Then I found out how customizable it was, and again I’ll summarize it in one line: “everything is a widget”. So in terms of UI, I made my simple, comfortable setup.
Now, in terms of programs and workflow, I still use PhpStorm for my PHP and Node.js projects, with npm and yarn installed globally; surprisingly, npm ran very fast compared to Windows and macOS. Git comes already installed. As for my PHP projects, I migrated all of them to Docker with Docker Compose; the majority were based on Laravel, Prestashop, WordPress and old native PHP apps. I managed to dockerize some of them from scratch, some with Laradock.
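To sketch what such a migration can look like (a hypothetical minimal docker-compose.yml; the image tags, service names and credentials are illustrative, not my actual setup):

```yaml
version: "3"
services:
  app:
    image: php:8.1-apache       # or build: . with a custom Dockerfile
    volumes:
      - ./:/var/www/html        # mount the project into the web root
    ports:
      - "8080:80"               # browse the app on localhost:8080
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: app
```

A single docker-compose up then brings the whole stack up, which is exactly why I find it more comfortable than configuring Apache/PHP locally.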
Java/.Net: RIP.
For mobile development there were some struggles configuring Ionic and React Native’s first run, but I was done with them quickly. No problem with Android Studio, but the emulator, again, wasn’t as good as on macOS, though not as bad as on Windows. And I discovered a helpful package that casts my connected Android device to my screen, so it shows up not as a virtual device but a physical one: scrcpy, from the Genymotion team!
And finally these are just some of the benefits why I picked manjaro:
  1. No big breaking updates.
  2. A rolling release distro.
  3. Fast security patches.
  4. The Great Arch User Repository (AUR)
  5. Snap and Flatpak support (but why?)
  6. Very stable.
But still, there are some drawbacks, Linux ones in general:
  1. Still needing photoshop and lightroom.
  2. MS Office for work purposes (I managed to use the web version since we have MS 365, but I still miss Excel for heavy use)


Finally, and personally, I’ll stick with Linux for these two main reasons: native support for Docker (future projects could be deployed with it) and the Unix environment’s similarity to production servers (CLI, SSH and package configuration).
I understand many of you will disagree with many things said in this post, but that’s okay! Because, in the end, we choose whatever helps us give the most of ourselves in terms of productivity.
Thank you all for reading the most boring post ever made on this platform! I would gladly hear some of your thoughts and experiences as well. Thanks again! [1]
[1]: edit. added used stack and a conclusion.


Dockerizing your first web app with python and flask

Dockerizing your first web app with python and flask:

Docker Architecture

First, we have to build a docker application with three containers:
  • ElasticSearch image
  • Kibana image
  • Web app image
For Elasticsearch and Kibana, we use pre-built 7.8.0 images from Docker Hub. The web application is built on an Alpine image (the stack also includes a redis:alpine service).
Note: the reason for using Alpine is that it is smaller and more resource-efficient than traditional GNU/Linux distributions (such as Ubuntu).

Required Files And Configuration

You can get the required materials on GitLab; cloning the repo gives you the files described in the following sections.

Docker-compose.yml structure

The docker-compose.yml file reflects the structure explained in the previous section:

version: '3'
services:
  web:
    build: .
    ports:
      - 5000:5000
    networks:
      - elastic
  redis:
    image: "redis:alpine"
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: es01
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: http://es01:9200
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge

Let’s take a look at the main tags used in docker-compose.yml:
  • Ports: maps a container port to a port on the host machine where Docker runs. In this case, 5000, 5601 and 9200.
  • Image: the Docker image that is downloaded for the required service.
  • Networks: named ‘elastic’ in order to connect the three services.
  • Environment: configures environment variables essential for the services to operate, such as RAM parameters. Within the Kibana service, it is necessary to define the variables ELASTICSEARCH_URL and ELASTICSEARCH_HOSTS in order to link it with the Elasticsearch service.

Dockerfile configuration

The Dockerfile has the required steps to configure the web application, linking it with our Python script that extracts the data and stores it in an Elasticsearch cluster, and that also imports impact_dashboard.ndjson into Kibana for later visualization.
It runs on an Alpine distribution and copies the required folders to get the app running. Moreover, thanks to requirements.txt, you can add all the dependencies the Python script needs.
FROM python:3.7-alpine

# System packages needed to build the Python dependencies
RUN apk add --no-cache git
RUN apk add --no-cache tk-dev
RUN apk add --no-cache gcc musl-dev linux-headers

# Copy the dependency list and the app resources
COPY requirements.txt requirements.txt
COPY mapping.json mapping.json
COPY templates templates
COPY impact_dashboard.ndjson impact_dashboard.ndjson

RUN pip install -r requirements.txt

# Copy the rest of the application code
COPY . .

# Make the Flask dev server reachable from outside the container
ENV FLASK_RUN_HOST=0.0.0.0

CMD ["flask", "run"]

How To Start Containers

In order to start the application (with Docker and Docker Compose previously installed), open your terminal and execute the following command:
$ docker-compose up
This will run the different commands stated in the Dockerfile and download the required Docker images. Once that’s finished, you can check that Elasticsearch (localhost:9200) and Kibana (localhost:5601) are successfully running.
Now, it’s time to visit the Python Flask web app, which runs on port 5000.
Clicking the start button will initialize the data extraction from Jitsi git data (check for more details) and will store that data in our Elasticsearch. Finally, it will import impact_dashboard.ndjson into our Kibana, allowing us to interactively play with the data.
Once the process is finished, the browser will show the next message:
Of course, we can see whether our Elasticsearch index and Kibana dashboard have successfully been added to our instances:
By default the time filter is set to the last 15 minutes. You can use the Time Picker to change the time filter or select a specific time interval or time range in the histogram at the top of the page.
Et voilà: now you have a cool dashboard up and running to analyze how a pandemic can impact Jitsi software development activity.

Bonus Point: Uploading A Docker Image To Docker Hub

With a Docker Hub account, you can build an image from the directory where the Dockerfile is located. Simply type:
$ docker build -t dockerhubID/DockerHubreponame:imagetag .
Then, upload the image to your Docker Hub repo (you can create the repo using the Docker Hub UI):
$ sudo docker push dockerhubID/DockerHubreponame:imagetag
Once the image is uploaded to Docker Hub, any user can use and run the app with the following docker-compose.yml
version: '3'
services:
  web:
    image: daviddp92/jitsi-data-extraction:1.0.0
    ports:
      - 5000:5000
    networks:
      - elastic
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: es01
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kib01:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: http://es01:9200
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge

Closing thoughts

We have learned how to dockerize a web app with Python technology and Flask framework using docker-compose. We also saw how to use and run the application using Docker Hub images.


First steps in WebGL

First steps in WebGL:

What is WebGL?

The literal definition of WebGL is “Web Graphics Library”. However, it is not a 3D library that offers us an easy-to-use API to say: «put a light here, a camera there, draw a character here, etc».
It’s a low-level API that converts vertices into pixels. We can understand WebGL as a rasterization engine. WebGL 2.0 is based on the OpenGL ES 3.0 graphics API (unlike the old version, which is based on ES 2.0).
The existing 3D libraries on the web (like THREE.js or Babylon.js) use WebGL under the hood. They need a way to communicate with the GPU to tell it what to draw.
This example could also be directly solved with THREE.js, using the THREE.Triangle. You can see an example here. However, the purpose of this tutorial is to understand how it works underneath, i.e. how these 3d libraries communicate with the GPU via WebGL. We are going to render a triangle without the help of any 3d library.

Creating a WebGL canvas

In order to draw a triangle, we need to define the area where it is going to be rendered via WebGL.
We are going to use the HTML5 canvas element, retrieving the context as webgl2.
import { useRef, useEffect } from 'preact/hooks'

export default function Triangle() {
  const canvas = useRef()

  useEffect(() => {
    const bgColor = [0.47, 0.7, 0.78, 1] // r,g,b,a as 0-1
    const gl = canvas.current.getContext('webgl2') // WebGL 2.0

    gl.clearColor(...bgColor) // set canvas background color
    gl.clear(gl.DEPTH_BUFFER_BIT | gl.COLOR_BUFFER_BIT) // clear buffers
    // @todo: Render the triangle...
  }, [])

  return <canvas ref={canvas} />
}

The clearColor method sets the background color of the canvas using RGBA (with values from 0 to 1).
Furthermore, the clear method clears buffers to preset values. The constant values used will depend on your GPU’s capabilities.
Once we have the canvas created, we are ready to render the inside triangle using WebGL… Let’s see how.

Vertex coordinates

First of all, we need to know that all these coordinates range from -1 to 1.
Reference points on the canvas:
  • (0, 0) – Center
  • (1, 1) – Top right
  • (1, -1) – Bottom right
  • (-1, 1) – Top left
  • (-1, -1) – Bottom left
The triangle we want to draw has these three points:
(-1, -1), (0, 1) and (1, -1). Thus, we are going to store the triangle coordinates in an array:
const coordinates = [-1, -1, 0, 1, 1, -1]
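Since clip space always runs from -1 to 1 no matter the canvas size in pixels, the mapping can be illustrated with a small helper (hypothetical, not part of the tutorial code):

```javascript
// Map a pixel coordinate on the canvas to WebGL clip space:
// X grows to the right, Y grows upwards, both in [-1, 1].
function pixelToClip(x, y, width, height) {
  const clipX = (x / width) * 2 - 1  // 0..width  -> -1..1
  const clipY = 1 - (y / height) * 2 // 0..height ->  1..-1 (Y is flipped)
  return [clipX, clipY]
}

pixelToClip(0, 0, 300, 150)    // top-left pixel -> [-1, 1]
pixelToClip(150, 75, 300, 150) // center         -> [0, 0]
```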

GLSL and shaders

A shader is a type of computer program used in computer graphics to calculate rendering effects with a high degree of flexibility. Shaders are coded and run on the GPU, written in the OpenGL ES Shading Language (GLSL ES), a language similar to C or C++.
Each WebGL program that we are going to run is composed of two shader functions: the vertex shader and the fragment shader.
Almost all the WebGL API is made to run these two functions (vertex and fragment shaders) in different ways.

Vertex shader

The job of the vertex shader is to compute the positions of the vertices. With this result (gl_Position) the GPU locates points, lines and triangles on the viewport.
To write the triangle, we are going to create this vertex shader:
const vertexShader = `#version 300 es
precision mediump float;
in vec2 position;
void main () {
  gl_Position = vec4(position.x, position.y, 0.0, 1.0); // x,y,z,w
}`
We can save it for now in our JavaScript code as a template string.
The first line (#version 300 es) tells the version of GLSL we are using.
The second line (precision mediump float;) determines how much precision the GPU uses to calculate floats. The available options are highp, mediump and lowp; however, some systems don’t support highp.
In the third line (in vec2 position;) we define an input variable for the GPU of 2 dimensions (X, Y). Each vertex of the triangle is in two dimensions.
The main function is called at program startup after initialization (as in C / C++). The GPU runs its content (gl_Position = vec4(position.x, position.y, 0.0, 1.0);), saving to gl_Position the position of the current vertex. The first and second arguments are x and y from our vec2 position. The third argument is the z axis; in this case it is 0.0 because we are creating a 2D geometry, not 3D. The last argument is w, which by default should be set to 1.0.
The GLSL identifies and uses internally the value of gl_Position.
Once we create the shader, we should compile it:
const vs = gl.createShader(gl.VERTEX_SHADER)

gl.shaderSource(vs, vertexShader)
gl.compileShader(vs)

// Catch some possible errors on vertex shader
if (!gl.getShaderParameter(vs, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(vs))
}

Fragment shader

After the “vertex shader”, the “fragment shader” is executed. The job of this shader is to compute the color of each pixel corresponding to each location.
For the triangle, let’s fill it with a single color:
const fragmentShader = `#version 300 es
precision mediump float;
out vec4 color;
void main () {
  color = vec4(0.7, 0.89, 0.98, 1.0); // r,g,b,a
}`

const fs = gl.createShader(gl.FRAGMENT_SHADER)

gl.shaderSource(fs, fragmentShader)
gl.compileShader(fs)

// Catch some possible errors on fragment shader
if (!gl.getShaderParameter(fs, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(fs))
}
The syntax is very similar to the previous one, although the vec4 we output here refers to the color of each pixel. Since we want to fill the triangle with rgba(179, 229, 252, 1), we translate it by dividing each RGB value by 255.
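That conversion is easy to sketch in JavaScript (a hypothetical helper; it truncates to two decimals, which matches the values used above):

```javascript
// Convert a 0-255 RGB(A) color to the 0-1 floats GLSL expects,
// truncated to two decimal places.
function rgbToGlsl(r, g, b, a = 1) {
  return [r / 255, g / 255, b / 255, a].map(v => Math.floor(v * 100) / 100)
}

rgbToGlsl(179, 229, 252) // -> [0.7, 0.89, 0.98, 1]
```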

Create program from shaders

Once we have the shaders compiled, we need to create the program that the GPU will run, attaching both shaders.
const program = gl.createProgram()
gl.attachShader(program, vs) // Attach vertex shader
gl.attachShader(program, fs) // Attach fragment shader
gl.linkProgram(program) // Link both shaders together
gl.useProgram(program) // Use the created program

// Catch some possible errors on program
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
  console.error(gl.getProgramInfoLog(program))
}

Create buffers

We are going to use a buffer to allocate memory on the GPU and bind this memory to a channel for CPU-GPU communication. We are going to use this channel to send our triangle coordinates to the GPU.
// allocate memory on the GPU
const buffer = gl.createBuffer()

// bind this memory to a channel
gl.bindBuffer(gl.ARRAY_BUFFER, buffer)

// use this channel to send data to the GPU (our triangle coordinates)
gl.bufferData(
  gl.ARRAY_BUFFER,
  new Float32Array(coordinates),
  // In our case it's a static triangle, so it's better to tell
  // WebGL how we are going to use the data so it can optimize
  // certain things.
  gl.STATIC_DRAW
)

// unbind the buffer after sending the data to avoid memory leak issues
gl.bindBuffer(gl.ARRAY_BUFFER, null)

Link data from CPU to GPU

In our vertex shader, we defined an input variable named position. However, we haven’t yet specified that this variable should take the value that we are passing through the buffer. We must indicate it in the following way:
const position = gl.getAttribLocation(program, 'position')

gl.enableVertexAttribArray(position)
gl.bindBuffer(gl.ARRAY_BUFFER, buffer)
gl.vertexAttribPointer(
  position, // Location of the vertex attribute
  2, // Dimension - 2D
  gl.FLOAT, // Type of data we are going to send to GPU
  false, // If data should be normalized
  0, // Stride
  0 // Offset
)

Drawing the triangle

Once we have created the program with the shaders for our triangle and created the linked buffer to send data from the CPU to the GPU, we can finally tell the GPU to render the triangle!
gl.drawArrays(
  gl.TRIANGLES, // Type of primitive
  0, // Start index in the array of vector points
  3 // Number of indices to be rendered
)
The drawArrays method renders primitives from array data. The primitives are points, lines or triangles; let’s specify gl.TRIANGLES.

All the code together

I’ve uploaded the article code to CodeSandbox in case you want to explore it.


With WebGL it is only possible to draw triangles, lines or points, because it only rasterizes, so you can only do what vectors can do. This means that WebGL is conceptually simple, while the process is quite complex… and it gets more and more complex depending on what you want to develop. Rasterizing a 2D triangle is not the same as rendering a 3D video game with textures, varyings, transformations…
