Working through the Covid-19 lockdown: What to look for in a datacentre partner


More than 100 million people are in strict lockdown across Europe as governments and health systems attempt to battle the spread of the devastating Covid-19 Coronavirus pandemic. And, while normal life has effectively ground to a halt, it’s become increasingly clear that the digital infrastructure that underpins the systems and services we use is vital in holding the world together.

From coping with spikes in internet traffic as the world logs onto Netflix, to keeping a nation of remote workers connected away from the office, few can doubt the importance of the datacentre to our lives right now. So much so that datacentre operators have now been deemed key workers by the UK Government, because of the role they play in delivering critical services, vital communications and much-needed entertainment to the public at large.

Colocation comes into its own during Covid-19

For individual businesses, the stakes are also high. Companies that have got their datacentre strategy right will now benefit from an intelligent and scalable asset that helps keep their show on the road.
This is where colocation excels. Colocation is an increasingly important option for organisations that want to focus on their core business operations and reduce their capital investment in infrastructure construction, and many companies now choose colocation partners to take full responsibility for their physical environment when they cannot do so themselves – whether due to cost, lack of expertise, or both.
Organisations tend to stay with their selected datacentre provider for a significant length of time, so making the right decision for today and in the future is critical. Here are some key considerations.

Location, connectivity and reliability

Businesses rightly expect low-latency and reliability from colocation providers, with zero tolerance for downtime, so being connected and always-on is a fundamental requirement.
A major factor in making this happen is location. Though data itself is not physically tangible, the infrastructure and power needed to store and transfer it are. For example, how data is stored and accessed is affected by local infrastructure, power resources, and geographic location. A good choice of location means an optimised infrastructure and application environment, whilst poor location can result in unstable connections and efficiency problems.
When it comes to networking and connectivity, it’s important to ensure your provider can deliver advanced, carrier-neutral networking capabilities in line with the organisation’s needs. For companies operating a hybrid cloud model, connectivity to the right carriers is critical. Companies should be aware that while some datacentre providers can build the best high-performance computing platform, without connectivity provisioning an on-ramp to other clouds, businesses won’t be able to adopt a hybrid cloud strategy.
In the wake of the Coronavirus, it’s important to remember that working practices, legislation and attitudes to working conditions and safety can vary significantly from country to country. Similarly, rules regarding remote working, remote access to data, and on-site attendance can vary widely depending on where you’re operating. And when it comes to end users, whose digital usage has increased and changed during the lockdown, their demands must also be paramount. Low latency, access to good networks and power, and guarantees of 100% uptime have become basic needs. In this respect, location has perhaps never mattered more.

Security and business continuity

Unfortunately, a time of crisis presents an opportunity for some, and there is plenty of evidence that cyber criminals are already capitalising on the fears and vulnerabilities of the Coronavirus outbreak.
There has been a significant spike in phishing email scams featuring Covid-19 lures, spoof government tax refunds and numerous fear-mongering messages.
With cyberattacks on the rise, even more so in today’s climate of home working, it’s crucial that a datacentre provider can guarantee they’ll keep your mission-critical company hardware safe. By choosing the colocation route, organisations benefit from 24/7 controlled access, battery back-up, diesel-powered generators that start automatically if power is lost – and certification to the important ISO 27001:2013 standard.
During times of crisis, such as the Covid-19 pandemic, datacentre providers must have stringent Business Continuity Plans in place, which can quickly and effectively be deployed, ensuring that security isn’t compromised, no matter the external factors.
The best datacentre providers have developed specific pandemic preparedness plans. They have adapted processes, implementing changes such as shift segregation with no movement of personnel between shifts, no-contact handover, more automated operations (such as remote/smart hands) and no-touch entry where access is required. They have also identified deep-cleaning services appropriate to datacentre environments.
On a broader level, providers are scrutinising their supply chains in order to ensure they’re robust and can deliver, and companies are working together in order to share best practice.

Flexibility is key

If there’s one thing that’s characterised this pandemic, it’s uncertainty – from an inability to model when the virus might peak in each country, to long-term doubt about when the nation will be able to get back to work, or at least to some level of pre-pandemic normality. It’s been difficult for experts to predict what is likely to happen next in these unprecedented times.
For businesses, and their relationships with datacentre providers, this means flexibility is crucial. Even before the pandemic, long-term, rigid, datacentre contracts were no longer palatable for many global cloud and digital organisations, where the fast pace of business and technology often required them to change direction quickly. Right now, the ability to flex and scale as required is increasingly critical.
Indeed, if enterprise and IT agility is held back by antiquated and inflexible datacentre platforms or contracts, businesses won’t be able to react quickly in line with fast-changing business plans – something that is needed more than ever today.
As we move through this period of extreme uncertainty, it’s crucial that we effectively manage the digital infrastructure that is enabling businesses, and ultimately the economy, to function. The critical relationship with your datacentre provider can be the difference between keeping your company running and failing to survive. Giving consideration to these important factors when choosing a supplier at the outset – ensuring that your datacentre partner is able to guarantee service-level performance, security, reliability and uptime, even (and especially) through the current crisis – could pay substantial dividends in the long term.



Wi-Fi vendors pitch people tracking for COVID-19 safety


Wi-Fi vendors Aruba and Juniper Networks work with partners to offer in-office people tracking to defend against a COVID-19 outbreak. Cisco focuses on managing gatherings in physical spaces.
Leading Wi-Fi vendors have tailored their products to accommodate companies that want to use their wireless networks to lower the chances of a COVID-19 outbreak in their buildings.

Aruba and Juniper Networks have positioned their wireless systems as a means for collecting data that companies could use for contact tracing after an employee is infected with the virus. On the other hand, Cisco is focusing on companies that want to enforce physical distancing requirements in buildings to reduce the chances of the infection spreading.

Juniper said companies could use its Mist Wi-Fi access points to track employees outfitted with badges that emit a continuous Bluetooth signal. Mist’s cloud-based analytics engine would let organizations identify people with whom an infected person had been in close contact. It would also show the places the worker visited in a building, and how long they were there.

Jeff Aaron, vice president of marketing at Mist, said the Juniper cloud would not store data to identify employees. Instead, a company would use a separate product to redirect that information to an on-premises database.

Juniper is working with a couple dozen customers that want to use wearable tags for in-office tracking, Aaron said. Juniper offers the devices through partners HID Global and Kontakt.io.

Products coming soon from Wi-Fi vendors

Aruba currently provides third-party developers with software development kits that they can use to integrate Aruba’s Bluetooth-supported tracking features into products. However, the company is “on the cusp” of delivering technology that would complement software for in-office contact tracing, said Alan Ni, a director within Aruba’s digital workplace unit. Companies developing those products include Aruba partner CX App.
Most customers asking for Wi-Fi-enabled contact tracing are colleges and companies with large offices, Ni said. COVID-19 has forced organizations to consider gathering location data on employees – something that would have been unthinkable before the pandemic.
“In the past, this was officially a no-fly zone,” Ni said. “We didn’t go there.”
Robert Mesirow, a partner in PwC’s IoT practice, said organizations still shouldn’t go there. He said tracking every employee’s movements is unnecessary. In April, PwC introduced an alternative called Check-In.
The mobile app collects only data that tells employers how long and how often employees were with an infected person, and how close they were to the virus carrier. Gathering more data could threaten employees’ sense of privacy and make it less likely they would reveal being infected to employers.
“You want to try to get as close to 100% [participation] as you possibly can, and to do that, you’ve got to have a trusted system,” Mesirow said. “And to have a trusted system, you probably shouldn’t be tracking.”
Meanwhile, Cisco plans to introduce on Monday features that let companies use its DNA Spaces platform to maintain safety in physical spaces. DNA Spaces comprises analytics, toolkits and an API for third-party software integration. The platform uses a Wi-Fi network to gather and analyze data on people’s movements within a store or a public venue, such as a museum or an airport.
The DNA Spaces upgrade would help organizations track the number of people in closed areas. It would also send notifications when an area exceeds its safe capacity, Cisco said in an email. Customers will also get a historical view of space use for future planning. Cisco declined to provide more details until it launches the product.



How artificial intelligence will change the future of work


AI and machine learning are already changing the way we work, and the future will likely bring bigger changes. AI could also create more jobs and help us recruit candidates – as long as people are willing to adapt and work smarter.

AI is developing at whirlwind rates. While nobody can say for certain how it will impact our work and personal lives, we can make a good few educated guesses. Also, with COVID-19 limiting human interaction in the built environment, advancements in AI and automation are on course to accelerate (provided funding is available, of course).
The age-old fear among some of the population is that AI will displace workers, leading to high levels of unemployment. A report by management consulting firm McKinsey shows that between 400 million and 800 million individuals across the globe could be “replaced” by automation and need to find new jobs by 2030.
However, AI could also create more jobs, as long as people are willing to adapt and work smarter. Research by PwC suggests that AI will add more to global GDP by 2030 than the combined current output of China and India.
So, in what ways could artificial intelligence change the future of work?

Shared augmented workplaces

The virtual communication technologies being developed currently will dramatically enhance the way we experience remote working. Widespread access to WiFi and portable devices has led to an increase in dispersed teams. Companies are replacing their traditional offices with virtual offices, enabling them to access global talent.
Holographic transportation can imitate the physical face-to-face interactions that add value to our workplace experience; the things we usually miss out on when telecommuting. In place of video conferencing screens, augmented reality allows us to collaborate in real time with our coworkers through 3D holographic images and avatars.
Check out Microsoft’s Spatial app for more insight.
Advancements in telerobotics have given humans the ability to operate machines remotely. This area of technology could also give rise to ubiquitous remote working; when teamed with holographic transportation, it could change how we work forever. Telerobotics is facilitated by broadband communications, sensors, and Internet of Things (IoT) technologies. 5G and Mobile Edge Computing (MEC) will accelerate the adoption of telerobotics and teleoperation.

The way we recruit will change

AI and machine learning are already changing the way we recruit employees. Technology enables us to analyse thousands of profiles and compile a list of relevant candidates efficiently. Following the shortlisting process, AI technology can be used to communicate with candidates and keep them engaged at every stage of the recruitment journey.
There are lots of AI recruitment tools out there today that help businesses hire remote workers. Users can assess a candidate’s skillset, get an insight into their personality, and even gauge to some extent whether or not they will “fit” with the culture of the company. Some solutions deliver online assessments to candidates and use AI to grade them. Facial recognition technology is used to detect any cheating.
Once the right candidate has been chosen, AI-enabled chatbots can be used (alongside human intervention) to facilitate the onboarding process, helping new starters understand everything from internal processes to the company culture.
AI also has the potential to minimise bias when it comes to recruitment and performance reviews, as candidates are assessed in a more fact-based way. It can also help HR professionals to pinpoint areas of bias in the company and resolve them efficiently. As a result, AI has the potential to make our “virtual” workplaces more inclusive and diverse.
AI can also be used to upskill new employees and help close the skills gap. The multinational engineering, industrial and aerospace conglomerate Honeywell has developed a simulator for training purposes. Their solution, which helps reduce training time by 60%, enables users to simulate tasks in virtual environments accessed through the cloud.

We’ll be more efficient

When artificial intelligence teams up with the Internet of Things, trend prediction can be done quickly, making businesses more efficient, sustainable, and effective. In time, it will also change the way companies are run, with humans collaborating with AI brains to solve complex problems. (Yes, there will still be a need for human input.)
As well as trend mapping, AI will make it easier for businesses to accurately identify any challenges. Businesses utilising AI and data (responsibly) could also significantly improve the customer and employee experience. Workers will have more time to focus on creatively fulfilling rather than repetitive tasks that machines will do. As a result, HR teams will be able to focus on more strategic work.
There are tools available that use robotic process automation (RPA) to monitor workflows and make informed, intelligent suggestions as to how tasks can be managed more effectively. They are able to identify when an individual is struggling with a problem and can provide assistance or point the worker in the right direction for human help.
Today, and for the foreseeable future at least, AI in the context of work is all about complementing and maximising human input rather than replacing it. It’s about eliminating the mundane and freeing us up to focus on the creative things only humans can do.



Best free resources to learn React in 2020


Over recent years, React has grown to become the most popular and widely-used JavaScript UI library (or framework as some might call it) out there. And with this popularity came a lot of new opportunities for both fresh and mature web developers through tons of new jobs, offers, and other React-connected tasks that are currently flooding the market.
And so, if you’re just getting into web development and want to start learning React right away, here’s something for you! I’ve compiled a list (so you don’t have to) of what are, in my opinion, the best free and up-to-date resources to learn React in 2020!



Official documentation

How predictable, I know. But the truth is the official React documentation is often the most up-to-date and reliable source for information about everything new and specific inside the world of React.
Aside from the API documentation, the website offers a getting started guide, an in-depth tutorial, and a few additional guides for more advanced features.



Video courses

If you prefer a more visual rather than text format, video courses might be for you. There are a lot of high-quality and even interactive React courses out there that you can use to kick-start your React journey!



The Beginner’s Guide to React

The Beginner’s Guide to React is a complete, full-blown course available on egghead.io. It’s meant to teach you everything you need to know to get your React skills up and running – from the sole purpose of the library to web app deployment.



Learn React

Learn React is another great course. It covers the same broad spectrum of React topics, but even more in-depth. What’s more, the course is available on scrimba.com – a great platform dedicated solely to coding tutorials with a built-in editor, to play with the code right when the video is rolling!



FreeCodeCamp

No resource list would be complete without a mention of FreeCodeCamp – one of the biggest platforms for learning how to code, with thousands of articles, guides, and tutorials on the matter.
FreeCodeCamp’s dedicated React series is full of learning material, code examples, and special challenges meant to test your newly acquired knowledge.



Cheatsheets

Who doesn’t like cheatsheets? They’re portable, easy to grasp, and provide you with ready-to-use code whenever needed. And guess what? – There’s quite a few of them for React, too!



React Cheatsheet



React Patterns



React Cheat Sheet



React Podcast

For more advanced React users, who just want to relax while still learning something new, the React Podcast might be the way to go. While not a learning resource per se, the podcast can help you get a better grasp of the React and overall web dev ecosystem, as well as an overview of recommended techniques and practices. Just something to broaden your horizons.



Blogs

Blogs form the backbone of the Web’s entire knowledge base. It’s no surprise that there are quite a few of them focused on React!



Dev.to

Not so much a blog as a blogging platform, Dev.to is a place with a friendly atmosphere where developers come around to discuss and learn about different topics. Here, the #react tag is the one you’ll be interested in. There’s a ton of articles, tutorials, and discussions there already, with more coming out every day! So, if you haven’t already, I highly recommend you check out and join Dev.to! Just follow the right tag. 😉



Overreacted.io

A personal blog by Dan Abramov – one of the core developers behind React and a co-author of Redux. Be aware that it’s not a place for complete beginners. Here, you can find quite a few in-depth posts about React and its internal structure. If you want to know a little bit more about how the library works, this is a great place to go.



React Resources

If you think that everything I’ve just listed is not enough, you might want to check React Resources. It’s a website that collects and categorizes a lot of different React resources from around the Web. It might not all be as up-to-date or high-quality as everything listed here, but you can be certain that you’ll find pretty much everything you need there – if you look close enough, that is.



Bottom line

So, here you go! A boatload of free, high-quality resources for learning React. I hope that this list helped you find your personal favorite and most enjoyable way of learning. And maybe you know of some other interesting resources that you’d like to share? If so, the comment section is yours!



Possible ways of Iterating ARRAYS in JavaScript


Arrays are used to solve most coding problems, so when starting out with them, a question arises for everyone: “What are the possible ways to iterate over arrays, and which one is the best to opt for?”. The main aim of this blog is to explore the possible ways and find which method performs best.



1. for :

The “for loop” is the most common way of iterating over an array. The syntax of for takes an initialization, followed by a condition, and then an increment/decrement operation. The example code below depicts the usage of the “for”.
If the condition is written as “i<a.length”, the for loop recalculates the length of the array on every iteration, which increases the running time. So, calculate the length beforehand, store it in a variable, and make use of it everywhere. This improves the performance of the code.
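A minimal sketch of this pattern, with illustrative values:

var a = [10, 20, 30];
//cache the length once so it is not recalculated on every iteration
var len = a.length;
for (var i = 0; i < len; i++) {
  console.log(a[i]);
}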



2. forEach :

“forEach()” invokes the given callback function once for each element of the array. forEach works only on arrays. The example code below depicts the usage of “forEach”.
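For instance, a minimal sketch with illustrative values:

var a = [10, 20, 30];
//the callback receives the current value and its index (and optionally the array itself)
a.forEach(function (value, index) {
  console.log(index, value);
});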



3. while :

“while” is an entry-level condition-checking control statement. The condition is provided to the while loop, and if it evaluates to true, control enters the loop and executes the statements. If the condition becomes false, control moves out of the loop. The example code below depicts the usage of the “while”.
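A minimal sketch:

var a = [10, 20, 30];
var i = 0;
while (i < a.length) {
  console.log(a[i]);
  i++;
}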



4. do-while :

The do-while loop performs exit-level condition checking, so it executes the block of code at least once, even when the condition is false. The example code below depicts the usage of the “do-while”.
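A minimal sketch – note that the block runs once before the condition is first checked:

var a = [10, 20, 30];
var i = 0;
do {
  console.log(a[i]);
  i++;
} while (i < a.length);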



5. for…of :

The for…of statement is used to loop over iterable data structures such as Arrays, Strings, Maps, etc. It calls a custom iteration hook with statements to execute for the value of each property of the object. The example code below depicts the usage of “for…of”.
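A minimal sketch:

var a = [10, 20, 30];
for (const value of a) {
  console.log(value); //the element itself, not its index
}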



6. for…in :

for…in is mostly used to iterate over the properties of an object. While for…of operates on the data items of the array directly, for…in loops through the indices of the array, so we must log “a[i]”. The for…in iteration happens in an arbitrary order. The example code below depicts the usage of “for…in”.
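A minimal sketch:

var a = [10, 20, 30];
for (const i in a) {
  console.log(a[i]); //i is the index, so we log a[i] to get the element
}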



7. filter :

“filter” takes an array and filters out unwanted elements based on the condition provided, which helps us avoid using for or forEach along with conditional statements. It is a method available only on arrays, and its first argument is a callback. After the callback is executed, a new array with the required result is returned. The example code below depicts the usage of “filter”.
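A minimal sketch, keeping only the values above a threshold (illustrative numbers):

var a = [10, 15, 20, 25];
var filtered = a.filter(function (value) {
  return value > 15;
});
console.log(filtered); //[20, 25]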



8. map :

Often when working with arrays, we need to modify the array elements, and the “map” method helps us achieve that. It is a method available only on arrays. Similar to “filter”, map executes a callback on each element and returns a new array with the required result. The example code below depicts the usage of “map”.
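A minimal sketch, doubling each element:

var a = [10, 20, 30];
var doubled = a.map(function (value) {
  return value * 2;
});
console.log(doubled); //[20, 40, 60]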
Now we have seen the possible ways of iterating arrays and performing operations on the array elements. A FEW THINGS TO BE NOTED…
  • It is commonly suggested that “for…in” not be used with arrays because we can’t guarantee that the iteration happens in sequence.
  • Make good use of the ES6 functions map and filter, as they make our work much simpler.
  • “map” creates a new array by transforming every element in an array individually. “filter” creates a new array by removing elements that don’t satisfy the condition.
  • The callback function for “map” must have a “return” statement. Single-line arrow functions use an implicit return, but when {} is used, “map” treats it as a function body and demands a return statement.
  • When an explicit return is not given, “map” returns an array of undefined values, but “filter” returns an empty array.
The performance of the for…of loop is great compared to for…in and forEach. If it is a casual iteration, it is mostly suggested to go for “for”.
Make use of the above-mentioned methods depending on the situation. I hope this blog helps you better understand the ways of iterating arrays in JavaScript.



Array’s Avengers

An array is a special variable used to store multiple values.
Example

//basic example of an array
var avengers = ["Iron man", "Captain America", "Thor"];

In JavaScript, you can also store values of different data types in an array.
Ok! So let’s start with the four avengers of arrays, which are: forEach(), filter(), map() and reduce().

We know these are different – but how?
Let’s learn about these functions in detail.
Starting with…

1.) forEach()

forEach() works just like the well-known for loop, allowing you to perform some action on all the elements one by one.
Syntax
array.forEach(callback(currValue, index, array), thisArg)

  • callback(mandatory) : The function which will be executed for each element. And it accepts three arguments which are:
    • currValue(optional) : aka currentValue, value on which the function is being processed.
    • index(optional) : the index of the current Value in array.
    • array(optional) : array for which forEach function is called.
  • thisArg(optional) : value of the context(this) while executing the callback function.

The ‘currValue’, ‘index’ and ‘array’ arguments are optional. But if you don’t need any of these, you’re probably just executing a piece of code array.length (an integer equal to the length of the array) times.

function Calculator() {
  this.count = 0;
}
//adding 'addition' function to 'Calculator' which will take array as a parameter.
Calculator.prototype.addition= function(array) {
  /*for 'addition', 'this' will be same as of Calculator's 'this' and
'sum' will be another attribute just like 'count'.*/
  this.sum = 0;
  array.forEach((currentValue, index, array) => {
    /* array: array which will be passed while calling the Calculator.addition function.
index: will be index of currentValue in array */
    this.sum += currentValue;    //adding currentValue to sum
    ++this.count;
  }, this);//passing 'this', to pass the same context as Calculator to access sum and count of 'Calculator' 
}

const obj = new Calculator();
//taking a random array as an example
obj.addition([8, 12, 5]);
console.log("Count: ", obj.count);//Count: 3
console.log("Sum: ", obj.sum);//Sum: 25

In the above-mentioned example, we are calculating sum of all the elements of the array and finding the count of elements using forEach().
**You can skip the optional fields (index, array, currentValue and thisArg) if you do not want to use them.

2.) filter()

Unlike forEach() (which just iterates over the array), filter() allows filtering of an array based on the return value of the callback given to it. The filter() method creates an array filled with all the array elements that pass a test implemented by the provided function (callback).
Yes, you are right! filter() takes a callback as an argument whose return value decides the output.
Syntax
var result_array = array.filter(callback(currValue, index, array), thisArg)

  • callback(mandatory) : The function which will be executed for each element, whose returned value will decide the output (if it returns true, filter() will add the currValue to the filtered array; otherwise it will skip currValue). And it accepts three arguments which are:
    • currValue(optional) : aka currentValue, value on which the function is being processed.
    • index(optional) : the index of the current Value in array.
    • array(optional) : array for which filter() is called.
  • thisArg(optional) : value of the context(this) while executing the callback function.

Example

function Movies(){
 this.moviesCount = 0; 
 this.watchedMovies = null;
}
Movies.prototype.getWatchedMovies = function(array, watched_topic){
 this.watchedMovies = array.filter((value, index, array)=>{
  /* array: An array which will be passed while calling the Movies.getWatchedMovies function.
index: will be index of currentValue in array */
  if(value.indexOf(watched_topic) !== -1){
    ++this.moviesCount;//incrementing count when the movie matches the watched topic
    return true; //returning true to include this value in the filtered array
  } else {
   return false;//returning false to leave this value out of the filtered array
  }
 }, this);//passing 'this', to pass the same context as Movies to access moviesCount of 'Movies' 
}
let obj = new Movies();
let movies_array = ["Captain America: The First Avenger", "Captain America: Civil War", "Iron Man", "Iron Man 2"]
obj.getWatchedMovies(movies_array, "Iron Man");
console.log("Watched movies: ",obj.watchedMovies);//Watched movies: array(2) ["Iron Man", "Iron Man 2"];
console.log("Count: ", obj.moviesCount);//Count: 2

In the above-mentioned example, we filtered the movies array using the ‘watched_topic’. If we check the array we applied filter on, it will not have changed after filtering. That means filter does not change or update the existing array; it returns a new filtered array every time.
The difference between forEach() and filter() is that forEach() iterates over the array and executes the callback, whereas filter() executes the callback and checks its return value, and on the basis of that return value it decides what should be put inside the filtered array (when the return value is ‘true’, it adds the currValue to the final array, and when it gets ‘false’, filter ignores that currValue).

3.) map()

Like forEach() and filter(), map() takes a callback function and executes that callback for each element of the array.
map() returns a new array populated with the result of calling the callback on every element.
Syntax
var result_array = array.map(callback( currValue, index, array) {
// return element for result_array
}, thisArg)

  • callback(mandatory) : The function which will be executed for each element whose returned value will be added in the resulting array. And it accepts three arguments which are:
    • currValue(optional) : value on which the function is being processed.
    • index(optional) : the index of the current Value in array.
    • array(optional) : array for which map() is called.
  • thisArg(optional) : value of the context(this) while executing the callback function.

Example

var getMoviesStatus = function( movies_array, watched_topic){
/*in this example, I don't want index , movies_array and 'this' argument inside the callback given to map(). Hence, skipping them.*/
 var moviesStatus = movies_array.map((currentValue)=>{
  if(currentValue.indexOf(watched_topic) !== -1){
   return {[currentValue]: "watched"};//computed key, so the movie name becomes the property with 'watched' status
  } else {
   return {[currentValue]: "pending"};//computed key with 'pending' status
  }
 })
 //returning the map() result, aka moviesStatus
 return moviesStatus;
}

let movies_array = ["Captain America: The First Avenger", "Captain America: Civil War", "Iron Man", "Iron Man 2"];
console.log(getMoviesStatus( movies_array, "Iron Man"));
//[{"Captain America: The First Avenger": "pending"}, {"Captain America: Civil War": "pending"}, {"Iron Man": "watched"}, {"Iron Man 2": "watched"}];

In the above example, we enhanced our previous example in which we were filtering the movies array using ‘watched_topic’. But now, we are returning an array of objects having movies and their status.
Our callback returns an object during its execution for each element, with the currentValue (which will be the movie name in our case) as the key and its status as the value. map() will take those objects, populate them in an array, and return that array.
Unlike filter(), map() populates the resulting array with the values returned by the callback provided to it.

4.) reduce()

Last but not least.
reduce() also takes a callback and executes that callback for all the elements of the array, but unlike filter() and map(), it does not return an array. It takes a reducer function (your callback), executes it for each element, and reduces the array to a single value.
Syntax
var result = array.reduce(callback( accumulator, currValue, index, array ), initialValue)

  • callback(mandatory) : The function which will be executed for each element (except for the first element, when initialValue is not provided). And it accepts following arguments which are:
    • accumulator(optional) : The accumulator accumulates the return value of callback. It is the value returned by the callback during its execution for the last iteration. For the first iteration, its value will be equal to initialValue if initialValue is provided else it will be initiated with the first element of the array for which reduce() is called.
    • currValue(optional) : value on which the function is being processed.
    • index(optional) : the index of the current Value in array. reduce() starts iteration from index = 0, when initialValue is provided. Otherwise, it starts with index = 1.
    • array(optional) : array for which reduce() is called.
  • initialValue(optional) : if initialValue is provided, the first iteration will start from index = 0 and the accumulator’s value (for the first iteration) will be equal to initialValue. Otherwise, the first iteration will start from index = 1, and the accumulator’s value (for the first iteration) will be equal to array[0]. See the example for better understanding. If the array is empty and no initialValue is provided, a TypeError will be thrown.
Example
//this function will calculate sum
var getSum = function (array, initialValue){
    ///callback will be passed to the reduce() 
    let callback = function(accumulator, currValue){
        return accumulator+currValue;
    }
    if(initialValue != undefined){
        //when initial value is provided passing it to the reduce
        return array.reduce(callback, initialValue);
    } else {
        return array.reduce(callback);
    }
//You can skip the if-else case by giving 0 as a default value to initialValue.
}
//calling the getSum function without initialValue
console.log(getSum([12, 8, 6, 7]));//33
//calling the getSum function with initialValue
console.log(getSum([12, 8, 6, 7], 5));//38

First of all, I apologize to Avengers fans for not taking an Avengers-related example. I found this example more suitable for understanding the concept.
So coming to the point, in the above-mentioned code snippet, we have calculated the sum of the elements of the array.
In case you provide an explicitly undefined initialValue to reduce(), it will take that and try to add elements to it, which will give NaN at the end.

  • At the first call of the getSum function, we called it without an initialValue. That means reduce() will start its iteration with index = 1 and the accumulator’s value will be initiated with 12 (the first element of the provided array).
  • Whereas, when calling getSum the next time, we provided the initialValue ‘5’. This means that this time reduce() will start its iteration with index = 0, and the accumulator’s value will be initiated with 5 (the provided initialValue).

So, this was all about the avengers of arrays.

Bringing AWS to App Developers


AWS is just too hard to use, and it’s not your fault. Today I’m joining AWS to help build for App Developers, and to grow the Amplify Community with people who Learn AWS in Public.



Muck

When AWS officially relaunched in 2006, Jeff Bezos famously pitched it with eight words: “We Build Muck, So You Don’t Have To”. And a lot of Muck was built. The 2006 launch included 3 services (S3 for distributed storage, SQS for message queues, EC2 for virtual servers). As of Jan 2020, there were 283. Today, one can get decision fatigue just trying to choose among the 7 ways to do async message processing in AWS.
The sheer number of AWS services is a punchline, but is also testament to principled customer obsession. With rare exceptions, AWS builds things customers ask for, never deprecates them (even the failures), and only lowers prices. Do this for two decades, and multiply by the growth of the Internet, and it’s frankly amazing there aren’t more. But the upshot of this is that everyone understands that they can trust AWS never to “move their cheese”. Brand AWS is therefore more valuable than any service, because it cannot be copied, it has to be earned. Almost to a fault, AWS prioritizes stability of their Infrastructure as a Service, and in exchange, businesses know that they can give it their most critical workloads.
The tradeoff was beginner friendliness. The AWS Console has improved by leaps and bounds over the years, but it is virtually impossible to make it fit the diverse use cases and experience levels of over one million customers. This was especially true for app developers. AWS was a godsend for backend/IT budgets, taking the relative cost of infrastructure from 70% to 30% and solving underutilization by providing virtual servers and elastic capacity. But there was no net reduction in complexity for developers working at the application level. We simply swapped one set of hardware-based computing primitives for an on-demand, cheaper (in terms of TCO), unfamiliar, proprietary set of software-defined computing primitives.
In the spectrum of IaaS vs PaaS, App developers just want an opinionated platform with good primitives to build on, rather than having to build their own platform from scratch.
That is where Cloud Distros come in.



Cloud Distros Recap

I’ve written before about the concept of Cloud Distros, but I’ll recap the main points here:
  • From inception, AWS was conceived as an “Operating System for the Internet” (an analogy echoed by Dave Cutler and Amitabh Srivastava in creating Azure).
  • Linux operating systems often ship with user friendly customizations, called “distributions” or “distros” for short.
  • In the same way, there proved to be good (but ultimately not huge) demand for “Platforms as a Service” – with 2007’s Heroku as a PaaS for Rails developers, and 2011’s Parse and Firebase as a PaaS for Mobile developers atop AWS and Google respectively.
  • The PaaS idea proved early rather than wrong – the arrival of Kubernetes and AWS Lambda in 2014 presaged the modern crop of cloud startups, from JAMstack CDNs like Netlify and Vercel, to Cloud IDEs like Repl.it and Glitch, to managed clusters like Render and KintoHub, even to moonshot experiments like Darklang. The wild diversity of these approaches to improving App Developer experience, all built atop of AWS/GCP, lead me to christen these “Cloud Distros” rather than the dated PaaS terminology.



Amplify

Amplify is the first truly first-party “Cloud Distro”, if you don’t count Google-acquired Firebase. This does not make it automatically superior. Far from it! AWS has a lot of non-negotiable requirements to get started (from requiring a credit card upfront to requiring IAM setup for a basic demo). And let’s face it, its UI will never win design awards. That just categorically rules it out for many App Devs. In the battle for developer experience, AWS is not the mighty incumbent, it is the underdog.
But Amplify has at least two killer unique attributes that make it compelling to some, and at least worth considering for most:
  • It scales like AWS scales. All Amplify features are built atop existing AWS services like S3, DynamoDB, and Cognito. If you want to eject to underlying services, you can. The same isn’t true of third party Cloud Distros (Begin is a notable exception). This also means you are paying the theoretical low end of costs, since third party Cloud Distros must either charge cost-plus on their users or subsidize with VC money (unsustainable long term). AWS Scale doesn’t just mean raw ability to handle throughput, it also means edge cases, security, compliance, monitoring, and advanced functionality have been fully battle tested by others who came before you.
  • It has a crack team of AWS insiders. I don’t know them well yet, but it stands to reason that working on a Cloud Distro from within offers unfair advantages to working on one from without. (It also offers the standard disadvantages of a bigco vs the agility of a startup) If you were to start a company and needed to hire a platform team, you probably couldn’t afford this team. If you fit Amplify’s target audience, you get this team for free.
Simplification requires opinionation, and on that Amplify makes its biggest bets of all – curating the “best of” other AWS services. Instead of using one of the myriad ways to setup AWS Lambda and configure API Gateway, you can just type amplify add api and the appropriate GraphQL or REST resources are set up for you, with your infrastructure fully described as code. Storage? amplify add storage. Auth? amplify add auth. There’s a half dozen more I haven’t even got to yet. But all these dedicated services coming together means you don’t need to manage servers to do everything you need in an app.
Amplify enables the “fullstack serverless” future. AWS makes the bulk of its money on providing virtual servers today, but from both internal and external metrics, it is clear the future is serverless. A bet on Amplify is a bet on the future of AWS.
Note: there will forever be a place for traditional VPSes and even on-premises data centers – the serverless movement is additive rather than destructive.
For a company famous for having every team operate as separately moving parts, Amplify runs the opposite direction. It normalizes the workflows of its disparate constituents in a single toolchain, from the hosted Amplify Console, to the CLI on your machine, to the Libraries/SDKs that run on your users’ devices. And this works the exact same way whether you are working on an iOS, Android, React Native, or JS (React, Vue, Svelte, etc) Web App.
Lastly, it is just abundantly clear that Amplify represents a different kind of AWS than you or I are used to. Unlike most AWS products, Amplify is fully open source. They write integrations for all popular JS frameworks (React, React Native, Angular, Ionic, and Vue) and Swift for iOS and Java/Kotlin for Android. They do support on GitHub and chat on Discord. They even advertise on podcasts you and I listen to, like ShopTalk Show and Ladybug. In short, they’re meeting us where we are.
This is, as far as I know, unprecedented in AWS’ approach to App Developers. I think it is paying off. Anecdotally, Amplify is growing three times faster than the rest of AWS.
Note: If you’d like to learn more about Amplify, join the free Virtual Amplify Days event from Jun 10-11th to hear customer stories from people who have put every part of Amplify in production. I’ll be right there with you taking this all in!



Personal Note

I am joining AWS Mobile today as a Senior Developer Advocate. AWS Mobile houses Amplify, Amplify Console (One stop CI/CD + CDN + DNS), AWS Device Farm (Run tests on real phones), and AppSync (GraphQL Gateway and Realtime/Offline Syncing), and is closely connected to API Gateway (Public API Endpoints) and Amazon Pinpoint (Analytics & Engagement). AppSync is worth a special mention because it is what first put the idea of joining AWS in my head.
A year ago I wrote Optimistic, Offline-first apps using serverless functions and GraphQL sketching out a set of integrated technologies. They would have the net effect of making apps feel a lot faster and more reliable (because optimistic and offline-first), while making it a lot easier to develop this table-stakes experience (because the GraphQL schema lets us establish an eventually consistent client-server contract).
9 months later, the Amplify DataStore was announced at Re:Invent (which addressed most of the things I wanted). I didn’t get everything right, but it was clear that I was thinking on the same wavelength as someone at AWS (it turned out to be Richard Threlkeld, but clearly he was supported by others). AWS believed in this wacky idea enough to resource its development over 2 years. I don’t think I’ve ever worked at a place that could do something like that.
I spoke to a variety of companies, large and small, to explore what I wanted to do and figure out my market value. (As an aside: It is TRICKY for developer advocates to put themselves on the market while still employed!) But far and away the smoothest process where I was “on the same page” with everyone was the ~1 month I spent interviewing with AWS. It helped a lot that I’d known my hiring manager, Nader for ~2yrs at this point so there really wasn’t a whole lot he didn’t already know about me (a huge benefit of Learning in Public btw) nor I him. The final “super day” on-site was challenging and actually had me worried I failed 1-2 of the interviews. But I was pleasantly surprised to hear that I had received unanimous yeses!
Nader is an industry legend and personal inspiration. When I completed my first solo project at my bootcamp, I made a crappy React Native boilerplate that used the best UI Toolkit I could find, React Native Elements. I didn’t know it was Nader’s. When I applied for my first conference talk, Nader helped review my CFP. When I decided to get better at CSS, Nader encouraged and retweeted me. He is constantly helping out developers, from sharing invaluable advice on being a prosperous consultant, to helping developers find jobs during this crisis, to using his platform to help others get their start. He doesn’t just lift others up, he also puts the “heavy lifting” in “undifferentiated heavy lifting”! I am excited he is leading the team, and nervous how our friendship will change now he is my manager.
With this move, I have just gone from bootcamp grad in 2017 to getting hired at a BigCo L6 level in 3 years. My friends say I don’t need the validation, but I gotta tell you, it does feel nice.
The coronavirus shutdowns happened almost immediately after I left Netlify, which caused complications in my visa situation (I am not American). I was supposed to start as a US Remote employee in April; instead I’m starting in Singapore today. It’s taken a financial toll – I estimate that this coronavirus delay and change in employment situation will cost me about $70k in foregone earnings. This hurts more because I am now the primary earner for my family of 4. I’ve been writing a book to make up some of that; but all things considered I’m glad to still have a stable job again.
I have never considered myself a “big company” guy. I value autonomy and flexibility, doing the right thing over the done thing. But AWS is not a typical BigCo – it famously runs on “two pizza teams” (not literally true – Amplify is more like 20 pizzas – but still, not huge). I’ve quoted Bezos since my second ever meetup talk, and have always admired AWS practices from afar, from the 6-pagers right down to the anecdote told in Steve Yegge’s Platforms Rant. Time to see this modern colossus from the inside.



The 7 best features of React over the last 7 years


As React turns 7 these are the features that have improved my developer experience the most over that period of time.



2013 – Initial Release

Before there could be new features, there needed to be a tool: React was officially launched on May 29th, 2013.



2014 – Developer Tools

The React Developer tools are a browser extension that enables you to easily debug your react app.



2015 – Stateless components

React 0.14 introduced the ability to create components using a simple arrow function
// A function component using an ES2015 (ES6) arrow function:
var Aquarium = (props) => {
  var fish = getFish(props.species);
  return <Tank>{fish}</Tank>;
};



2016 – Create React App

Introduced by Dan Abramov in July 2016, Create React App has become a game-changer when it comes to quickly scaffolding a new React app.
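For instance, scaffolding and starting a new app takes just a few commands (my-app is a placeholder name):

npx create-react-app my-app
cd my-app
npm start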



2017 – React Fiber

React Fiber was the name given to the complete rewrite of the React rendering algorithm, which greatly improved the performance of apps over the previous version.



2018 – Lazy Loading & Suspense

Suspense lets you specify the loading indicator in case some components in the tree below it are not yet ready to render. Today, lazy loading components is the only use case supported by Suspense.
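A minimal sketch, assuming ./Profile exports a React component:

import React, { Suspense, lazy } from 'react';

// The Profile code is fetched only when the component first renders.
const Profile = lazy(() => import('./Profile'));

function App() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <Profile />
    </Suspense>
  );
}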



2019 – Hooks

Hooks let you use state and other React features in functional components without writing a class.
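For example, a counter written as a function component – a minimal sketch:

import React, { useState } from 'react';

function Counter() {
  // useState adds local state to a function component.
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}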



2020 – Concurrent mode

Concurrent Mode is the newest feature and is something the community has been excited about for a long time. It is a set of new features that help React apps stay responsive and gracefully adjust to the user’s device capabilities and network speed.



How To Host Your Website For Free on Github Pages


So you’ve created your cool new website in your code editor, yay! Perhaps it’s your portfolio website that you want potential employers to see…perhaps it’s a fun web app…or perhaps it’s a website about your cat 😺. Either way, you are ready to share it with the world, but how are you gonna do that❓
There are many different websites online that will host your website for a reasonable fee, but did you know that you can actually host your site on Github Pages for free?
Before I begin: there are multiple ways of carrying out what I am about to tell you, so these are not definitive instructions. But from one newbie to another, I’m going to share what I’ve learned from my own experience, in the most straightforward way I can. However, I am assuming that you have a tiny bit of knowledge about using a terminal and code editor already.
If you haven’t heard of Github before, it is a web-based platform used for version control. If you are new to Github, The Coding Train has a really great series of videos on Youtube explaining what Github is and how it works.



Cool, so what is Github pages?

Github Pages allows you to host your project (aka your website) directly from your Github repository. It means you can make your website live for the world to see!



What you will need next:

  • Your computer’s terminal or a code editor with a terminal (my preferred code editor is VS Code so this is what I will be referring to in this tutorial)
  • A Github account
  • A custom domain (optional)



Create your repository:

  1. Log in to Github and create a new repository. This is where you will upload your project to.
  2. Add a name and description. Your repository name needs to be: username.github.io, where username is whatever your username is on Github.
    You don’t need to initialise a README right now. To keep things simple, we can add that later.
    Once you’ve pressed the green ‘create repository’ button, you’ll notice you are given a screen with some instructions which will make sense in the next section…



Pushing your project to your Git repository:

The following will all be done in the terminal. I tend to use the terminal in VS Code as I will have created my project there:
  1. Make sure you are in your website folder in the terminal. Type the following command:
git init 
This will initialise your project ready for the next step.
  2. Then you need to add the origin (the repository address where you are uploading your website to). You will need to use the link given to you after you created your repository, but it looks like this:
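git remote add origin https://github.com/username/username.github.io.git
(with username replaced by your own Github username – copy the exact link shown on your repository page)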
  3. Make sure you add all the files that you want to upload. You can add them individually, but to make sure I’ve not missed anything, I tend to just do:
git add --all
  4. Include a commit message. Type whatever you want in the speech marks, but as it’s my first commit, I will do something like:
git commit -m "initial commit"
  5. Finally, push your project to your master branch:
git push -u origin master
  6. Now refresh your Github and you will see all the files in your repository! Yay!
  7. Adding a README (optional) – A README enables other people (or you!) to understand what your repository is all about. You don’t have to add one, but if you intend to show your repository to other people, then I would advise you to add one. You can either add your README on Github by clicking that green add a README button, or you could add one in your code editor later and push the changes to your master branch again.



IMPORTANT – If you make any changes on Github directly, your local machine won’t recognise this. One way to rectify this is to pull your repository to your local machine to sync it back up. For this reason, I personally prefer to make all changes in my code editor and then just push the changes to Github.



Publishing on Github Pages

Now because you named your repository username.github.io, it should automatically be published as your main Github pages site. Put https://username.github.io in your browser to check that you can see it!
If for some reason this doesn’t work, click on your respository and then click on settings in the top right corner. If you scroll down, you will see a section for Github pages. Select master branch as the source and you should then see a message:
Your site is published at https://username.github.io
Custom Domains – You will also see a box here for a custom domain. If you have already purchased a custom domain, then this is where to put it! There is some help on Github Pages if you are not sure how to configure it. You may also need to check with your domain provider if you are not sure how to set up the correct DNS records.
Enforce HTTPS – I would recommend ticking the box that says Enforce HTTPS, as it adds a layer of security and makes your site look more trustworthy, as you get that little padlock in the browser. Without this, people might be wary about visiting your site.



What if I want to publish more than one Github pages site?

  • Github Pages allows you to host one site per GitHub account or organization, but unlimited project sites.
  • This means that if you want to host another site or webpage with Github Pages, it will be classed as a project. When naming repositories for further projects, you can call them whatever you want, like “blog”, “my-cool-app”, “website-for-my-cat” etc.
  • For example, if my website that I’ve already hosted on Github pages is https://username.github.io, then if I upload a blog that I’ve made and publish that to Github pages, the address for it would be https://username.github.io/blog.
  • Similarly, if I used a custom url which made my Github Pages site https://joebloggs.com, then the address for my blog hosted on Github Pages would be https://joebloggs.com/blog.
  • If you need to host a second site on GitHub Pages with its own unique URL, you would need to use another Github account.



I hope this article has helped you to get started using Github Pages! For help with related topics, there are lots of in-depth guides in Github’s help section too.



Lego’s comical guide to working from home during coronavirus


Known for its problem-solving approach, Lego has tackled one of the most universal problems facing workers at the moment by creating a tongue-in-cheek guide to working from home. The step-by-step picture guide will teach you exactly how to handle the unique challenges of WFH, and how to “be awesome” at it.
Recreating the on-point vibe of an office (remember those?) within the comfort of your home involves remembering to get dressed, sitting properly at your desk (no dining chairs allowed – see our best office chairs if you’re struggling with this), personalising your desk and then remaining on task at all times. It’s a lot to take on, but following Lego’s guide should see you right. 
If all this Lego talk has whet your appetite, you can head over to our best Lego City sets guide. But first, let’s take a closer look at Lego’s WFH instructions…
Dress appropriately
The first step is to dress appropriately. ‘Appropriate’ in this case, is to take on the strategy of newsreaders everywhere (we assume). Your top half is all that matters here. As long as you don’t stand up during your video call, you can follow the Minifigure’s lead as he models a business-ready top half and more-than-casual-Friday lower body attire.
Follow proper ergonomics
Next, get your build on and recreate the ergonomics of your office set-up. Lego has created a stable structure to rest a laptop on, and we’re sure those Lego books fit together more solidly than our wobbly tower would. And if the tower of books won’t cut it, here’s our pick of the best desks around right now.
Personalise
Now it’s time to soften things up with some personal touches. Lego’s idea of personalising your desk seems to involve adding a looming picture of your boss on your left (it might help to keep you on track), and an interloper visitor on your desk (pretty accurate in our case).
Start working
And finally, to work. Which may be easier said than done, of course. As Lego suggests, that picture of your boss is totally expendable when the inevitable procrastination begins. And how is your cat meant to resist that mouse, anyway?
So, if you’re not totally happy with your WFH set up, perhaps Lego has given you the tools to get it right – heart patterned underwear and all. We certainly send our thanks to Lego for making light of a tricky work/life situation. 
Getting creative with a WFH set up is widespread during lockdown, as we saw with this ultimate work from home set up that divided opinion recently. And it isn’t only work-from-homers that are unleashing design creativity in the time of COVID-19. This optician now has a perspex screen in the shape of a pair of glasses to shield its reception staff, which has taken the internet by storm.

