A denial-of-service attack was performed against the Quad9 DNS service, disrupting it on May 3, 2021 for approximately 90 minutes. The volumetric attack apparently used LDAP reflection amplification. The attack did not reach the servers but caused network congestion.
Salesforce suffered a global service outage on May 11-12, 2021. The failure was due to a DNS change that an engineer committed globally all at once. The normal procedure would have been a gradual rollout, but for some reason this person pushed the change everywhere simultaneously as an emergency change. The engineer had worked at the company for many years, and the script used was years old. More pain was added when it was revealed that server management depended on the operation of the DNS servers, so normal server management was lost in the DNS failure. The service’s status page did not work either, and the company had to fall back on its documentation pages, which aroused amusement. Word of the problems rushed through social media to customers. Management threw this unfortunate engineer to the wolves and openly blamed one person, which led to some backlash on social media. Errors happen, and the point is to learn from them, make corrections, and improve. #hugops
There is a constant bustle around the public cloud. AWS is so far the pioneer and leader, involved in almost all implementations thanks to its wide range of services and strong telco community. However, the other public clouds are involved in many deals and have hired managers for the telco business. Dish chose Oracle to implement network slicing and the related control-plane functions in the cloud. Vodafone is developing a data-crunching platform with Google, and Telefonica is building edge services with Microsoft. The differences between the clouds will show in how they manage cloud-native network functions. Google and Microsoft have their strengths and are expected to challenge AWS.
5G must be underpinned by massive construction work to upgrade the radio network, transmission network, and mobile core. In Japan, Rakuten has struggled with its plans: the actual number of base stations needed and the construction costs have been unexpected. The size of an entire network is indicated by the fact that in Japan Softbank has 230,000 base stations in its 4G network, while small South Korea has a total of 870,000 4G base stations and 166,000 5G base stations. China is estimated to have built 1.3 million 5G base stations by the end of this year, reaching only moderate coverage.
Silicon Valley startup Airwave is trying to solve the difficulty of dense construction. Airwave promises to prepare base station sites ready for use. It is trying to be the Airbnb of sites: it searches for and prepares potential installation sites by obtaining permits and concluding contracts. Airwave does not provide equipment or build the site. It passes the rental fee of a few hundred back to the site owner. The operational model is largely the same as in the traditional mast and site business, only now with ever-larger numbers and more unusual locations.
The 5G private network market is so hot that even component manufacturers want a share and are starting to make their own network devices. Foxconn and Siemens are planning to manufacture their own 5G products. For the time being, spectrum licenses are held by operators, but when spectrum is released or made available to all, the choice of 5G network providers may look different.
Old mobile networks have a long tail. The 3G networks, which have been in use for almost 20 years, are being shut down, and historical relics are being revealed. In the U.S., 6 million alarm devices using the 3G network are threatening to fall off the network and have caused a dispute. Replacing devices is, of course, a tough task, but perhaps nowadays it would be natural to give devices a lifecycle upgrade after 10-15 years, rather than assuming that network technology stays the same forever. On the other hand, mobile networks could be backward compatible to some degree so that even older devices would work. There is a shortage of frequencies, and they are needed for new services. T-Mobile and Dish continue their media war over how many customers remain on the soon-to-be-closed CDMA network: 900,000 or 4 million.
AT&T ends its adventure in the media world by spinning off WarnerMedia into a separate company with Discovery. AT&T bought Time Warner five years ago for 85 billion and now got less than half of it back. With the 43 billion in proceeds, it will boost 5G and broadband construction. Earlier, Verizon announced the sale of its Verizon Media unit. Gone are the days when there was talk of triple/quad play and attracting consumers with media content. Doing business with content was harder than imagined. Attempts are still being made in Finland, although changes are already visible. DNA had the sense to give up pay-TV early on, but what about Elisa Viihde-Viaplay and Telia-Bonnier and the hockey league?
Operators don’t seem to learn but repeat the same mistakes over and over again. The background is envy of OTT services and Internet companies. Crazy expansions and world adventures have been seen, even though during this time the network has become critical to our society and daily life. What if the wasted money had been put directly into network development? But sales growth has to be sought somewhere. Telenor is withdrawing from Myanmar due to the difficult conditions. The coup was too much, and the 8-year attempt ends with a 780 million write-off.
Access technologies are evolving, and 5G, which isn’t quite there yet, is garnering the media attention. But DSL technology is also evolving. In Finland, however, DNA announced that it would retire the “160-year-old telegraph network”, aka the copper network, by 2025. Of course, mobile or fiber is the substitute technology. Nokia and Proximus have demoed a symmetrical 25G fiber network in Antwerp, Belgium. Admittedly, it has more applications on the enterprise side and in 5G backhaul than in home broadband.
The operators’ content business is waning, but Amazon is adding steam and buying MGM at a high price. Amazon already has content through Amazon Studios, Prime Video, Audible, and Twitch. The purpose of the purchase is not entirely clear: did Amazon buy MGM’s valuable catalog, or will it invest in and develop the studio? Probably both. Maybe we’ll finally see James Bond as a series. Amazon says companies need to identify when their business models no longer work. Economies of scale are valuable when the price of content has gotten out of hand. Amazon has also built an advertising and marketing business that is now thought to be as big as AWS.
For cloud networkers, the good news is that AWS data billing has been removed between VPCs within the same Availability Zone. A more specific example of cloud usage is building network services using AWS Lambda functions. Palo Alto has published a comprehensive implementation guide for using a firewall in a public cloud. It serves as a good guideline for all firewall deployments in the cloud.
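To make the Lambda idea concrete, a network service can be as small as a single handler function. The sketch below is an assumption-laden illustration, not the guide's recipe: the event shape (`source_ip`) and the hard-coded allow-list are invented for the example; a real deployment would pull policy from DynamoDB, SSM, or an environment variable.

```python
import ipaddress

# Hypothetical allow-list for the sketch; in production this would come
# from a managed data store, not a constant.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def handler(event, context):
    """Toy firewall-style Lambda: decide whether a source IP is allowed."""
    src = ipaddress.ip_address(event["source_ip"])
    allowed = any(src in net for net in ALLOWED_NETWORKS)
    return {"source_ip": str(src), "action": "allow" if allowed else "deny"}
```

The point is the shape: a stateless function, invoked per event, returning a decision, with no server to patch.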
Google has opened its Hamina data center extension for production and has been operating in Finland for 10 years now. According to a local interview, British and Irish workers have brought a nice added color to Hamina’s local pub culture by expanding beer selections.
Liberty Global and Digital Colony are joining forces with a development company that aims to bring 100 new edge data centers to Europe. Edge and low-latency application development has its special principles which include interesting concepts like offline applications, locality first, network optionality, and fencing. Thus, a global network and a small network latency may not be mandatory application requirements. More importantly, the operating models of the edge are still very vague. This is where the huge commercial potential lies.
Cloud networking is its own thing. The level of networking features varies, and solutions are affected by the varying features, compatibility, licensing, and costs of native and third-party virtual products. The field has been dominated by non-network experts. Now the role of the cloud network architect has come up in the debate: is one needed, and what would the role be? In my opinion, the cloud is about applications and the connections built for them. The network is built on, and depends on, the services of the cloud platform. Traditional networking and protocols have been left out, although the basic principles certainly remain underneath. As the features of the cloud grow, so does its networking maturity. The cloud platform provides an underlay, and the user builds the overlay to suit the applications and their own needs. Yes, a network-savvy designer is needed for this, as for all infrastructure and service design and building. Amid all the intricacies, the more complex an environment you build, the harder it is to live with. I think with the cloud we will repeat the same mistakes that were made in the past with physical networks.
Gartner warns of mixing public clouds. Multi-cloud has little to offer if you start picking the best pieces from different platforms. The same could be said for a multivendor network. Mixing vendors and platforms usually produces more harm than good. Every different piece of hardware, software and feature comes at a price. So my own advice is to focus as much as possible on similar equipment, even if it costs a little more and comes with unnecessary capacity. The cost is small compared to what is spent on operational chores with different platforms.
In the SD-WAN/SASE field, the middle-mile mania continues. Fortinet, Palo Alto, Versa, and VMware join Google’s Network Connectivity Center, and Megaport will build its own edge on Fortinet’s SD-WAN. Where are SD-WAN and SASE going? When the crowd returns to the office, SD-WAN will be in use again. SASE can act through an agent in the endpoint device, but there are many agentless devices that are easiest to tunnel to the cloud over the SD-WAN. However, SASE and its connectors are slowly eating into the role of SD-WAN. Multi-cloud is beginning to merge with SASE, but it is the weakness in SASE that still needs development. Companies like Aviatrix, Alkira, and Prosimo have seized on this. Data and application processing, protection, routing, access control, and so on also need further development, and there are differences between vendors. Gartner’s recommendation is to choose a vendor that lets you choose the appropriate method and location for inspection, routing, and logging.
According to a Masergy study, in five years, 92% of companies will use SD-WAN. Most companies use a hybrid model that combines their own environment and the public cloud. Private connection, such as MPLS, is still the prevailing practice for better performance and security. Most use a service provider and only 23% build the SD-WAN themselves. The main reasons for SD-WAN are efficiency, agility, and lower cost. The most important selection criteria for the solution are security and reliability.
The closing of the U.S. East Coast fuel distribution network due to ransomware was a precaution. The attack is known to have hit the IT systems, not the production control systems. The DarkSide group itself acknowledged that its job is to make money, not to harm society. A ransom of 5 million was paid when there was uncertainty about the scale of the attack and the company wanted to restore service quickly. Recovery took less than a week.
The Irish healthcare system underwent a similar incident. For safety’s sake, the entire system had to be shut down, even though the attack targeted data, not the operational treatment systems. The line between IT and OT blurs when IT is an integral part of all operations. The attackers’ activities, mental landscape, and business models are illuminated by a journalist who infiltrated the criminal group. The victims have been carefully selected and the preparations well made. The attack usually occurs on a Friday night or weekend when staff are away from work.
Norwegian Volue has been exemplary in its openness in dealing with a cyberattack, when so many companies choose to clam up and fail to address the issue. Volue has shared attack indicators and collaborated with the local CERT and police. It has provided detailed information on the situation daily and published managers’ contact information for further inquiries. Transparency protects the company, employees, and customers, and creates faith in the company’s operations.
Finnish government ICT centre Valtori has confirmed that the Pulse Secure vulnerability has not been exploited in its environment, or that exploitation is unlikely. Initially, false alarms due to a software error raised concerns about possible exploitation.
For the first time in history, the number of cloud security breaches exceeded the number of on-premises breaches: 73% of cyber incidents now involve external cloud resources. The number has jumped upwards in a year. However, there is no evidence that on-premises is safer. The biggest problems in the cloud are stolen credentials, configuration errors, and phishing. On-premises, by far the biggest and growing problem is ransomware. A total of 61% of the problems were related to credentials, and only 3% were due to vulnerabilities. Sad but true: 20% of the Internet-facing vulnerabilities were more than ten years old. A recent open-source risk analysis report tells the same story: 91% of the code analyzed contained open-source dependencies that had not been updated for two years, and 85% of dependencies were more than four years old.
A fragmentation vulnerability has been found in the wifi standard that affects all wifi devices. A design flaw has been in the standard since 1997 and is fortunately difficult to exploit. The manufacturers have been preparing updates for 9 months together with the Wifi Alliance and ICASI. Now they have been published. A brief summary has been compiled on the ICASI website.
The TsuNAME name server vulnerability could be used to generate a DDoS attack against another name server. VMware urges you to patch a serious vulnerability in all vCenter installations immediately; there is also a temporary workaround.
A Cisco switch may also be hijacked, as happened on the Init7 network, where a switch configuration was replaced with propaganda text via the Smart Install feature. The old vulnerability in Cisco AnyConnect and the critical vulnerabilities in SD-WAN vManage and HyperFlex have now been fixed. According to Cisco, these are not known to have been exploited.
In the age of microservices, the security of containers is often questioned. Traditional methods can be used to attack a Kubernetes cluster: DNS spoofing, the overlay network, or the BGP process. The API is the future of the network, and HTTP is the new TCP. The task of the network is to connect and separate, to protect microservices. This requires the visibility provided by a service mesh, such as Istio or Envoy, on which VMware’s Tanzu is also based. The service mesh acts like a firewall, load balancer, and application monitoring tool. However, gRPC has evolved so that it no longer needs a separate service mesh at its side but can handle load balancing and service discovery directly. With gRPC, services can connect directly to the control plane without a proxy.
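The client-side load balancing that gRPC does without a proxy boils down to the client itself picking the next backend for each call. The sketch below imitates that round-robin picking logic in plain Python so the idea is runnable anywhere; the endpoint addresses are invented for illustration, and in a real gRPC deployment the list would come from DNS or xDS resolution rather than a hard-coded list.

```python
import itertools

class RoundRobinPicker:
    """Minimal imitation of the endpoint-picking a gRPC client performs
    with its built-in round_robin policy -- no sidecar proxy involved."""

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        # Each call returns the next backend in turn.
        return next(self._cycle)

# Illustrative backends; a real client would get these from name resolution.
picker = RoundRobinPicker(["10.0.0.1:50051", "10.0.0.2:50051", "10.0.0.3:50051"])
calls = [picker.pick() for _ in range(4)]
```

After a full cycle the first backend repeats, spreading calls evenly without any middlebox in the data path.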
The Finnish Defence Forces have published their first report on military intelligence, and its content is an overview of operations, as might be expected. According to the report, foreign operations have been active in Finland. The local newspaper HS reports a shortage of technical experts that is not mentioned in the report. In the same context, the technical difficulty of monitoring traffic in a dynamically routed network is mentioned. The NSA spied on European politicians with the help of the Danish intelligence service in Operation Dunhammer during 2012-2014. Danish cables were tapped, and calls and messages were collected. The Swedish and Danish defense industries were also spied on. Espionage against Finnish companies is real, according to a report by the Helsinki Chamber of Commerce.
The Finnish Transport and Communications Agency has drafted its own “Huawei framework”. The document only lists the components defined as critical for the communication network.
Technology standardization is sometimes misunderstood. The IETF is not a police or regulator that oversees or decrees standards. It has no opinion or agenda on technical matters. The IETF is a volunteer organization that seeks consensus on issues and documents it. A draft can be written by anyone, anywhere, as the example of Google’s Warren Kumari shows. If a draft has ended up as an RFC, it already represents a good common view for further discussion. If further progress is made, the RFC can advance to BCP (Best Common Practice) level or even standard (STD). However, no particular body monitors compliance with these different levels of standards; the community collectively watches over them and may at most voice disapproval.
After six years, QUIC has achieved RFC status with the number 9000. The definition is so comprehensive that it is divided into four different RFCs, 8999-9002. QUIC does not include HTTP/3, the spec for which will come out later.
EVPN, that much-praised standardized protocol, is not fully compatible between vendors despite the standard. There are so many nuances, implementation differences, and stages of development in the protocol that it is almost surprising when something is made to interoperate. EANTC has been conducting multivendor testing for several years. The emphasis is on the service provider world, where multivendor solutions have a place in certain use cases. On the enterprise side, multivendor EVPN-VXLAN has no place if you don’t want to make your life difficult and protocol tuning your hobby. Maybe one day the development will reach the point where the vendors are truly compatible. EVPN-VXLAN is also behind Cisco ACI as a somewhat unique implementation. A three-part (1, 2, 3) comparison charts ACI’s advantages over third-party EVPN solutions, from technology to operation and from cost to scalability.
A more in-depth story has been published about Telia Carrier’s new core architecture. It is based on Cisco 8000 series routers with Broadcom Jericho2 chips and Acacia 400G coherent optics. The network has been made simple and cost-effective. Colt is doing the same modernization with a similar configuration.
With the help of Metamako technology, Arista is once again taking low-latency switching to a new level. Ten years ago it reached the level of 500 ns, which is now mainstream. The new 7130 switch performs L1 switching, so a delay of 5 ns is achieved at the electrical level. With this method, packets simply cannot be processed on the switch. L2 switching takes place with a delay of less than 100 ns. Instead of an ASIC, the switching is done on an FPGA, and with the SwitchApp application the user can select the appropriate features from a few different profiles. Packet timestamping is extremely accurate, at the 400-picosecond level.
One has only to wonder about Arista’s commitment to the HFT market, which is not that huge. The myths continue to be busted: is cut-through switching needed for anything anymore? The difference between cut-through and store-and-forward once mattered, but in practice today cut-through is of little importance, especially since it has many limiting factors that are present in almost every network. Cut-through cannot be done when the speed changes or when buffering occurs at the port. This means cut-through will not work if the switch has line cards or more than one ASIC. In addition, there is almost always a speed conversion between the access port and the uplink. So all that is left is local port-to-port switching on compatible hardware in some niche use case. In any case, most applications do not notice a difference of microseconds.
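The microsecond claim is easy to sanity-check: per hop, store-and-forward adds at most one frame's serialization delay over cut-through, which is just frame size divided by link speed. A quick calculation:

```python
def serialization_delay_us(frame_bytes, link_gbps):
    """Time to clock a frame onto the wire, in microseconds -- the extra
    per-hop latency store-and-forward adds over cut-through."""
    return frame_bytes * 8 / (link_gbps * 1e9) * 1e6

# A full-size 1500-byte frame at common link speeds:
delays = {gbps: serialization_delay_us(1500, gbps) for gbps in (1, 10, 100)}
# 1 Gbps: 12 us, 10 Gbps: 1.2 us, 100 Gbps: 0.12 us
```

At 10 Gbps the difference is 1.2 microseconds and at 100 Gbps barely a tenth of one, which is exactly why only niche HFT-style use cases notice it.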
When traffic bursts, are buffers needed in the switch? Well, not exactly. Outbound congestion is a problem only if multiple ports feed bursty traffic into the same uplink, or vice versa from the uplink to a server port. Instead of buffering and delay, it is probably better to drop the traffic in a controlled manner and let the modern TCP stack handle the problem quickly. Network Function Virtualization (NFV) has also been said to require buffering, but the arguments are as weak as in a data center network. The right place for buffering is the WAN edge router.
It’s good to remember that applications use IP packets: bigger or smaller, more or fewer, and with different content. Audio or video, a control signal, or monitoring information are just packets among others that the switch or router forwards in the same way. It is time to declare the special networking requirements of applications dead. Even ordinary, mediocre network infrastructure does its job just fine without any wonderful tuning. Just make sure the network topology and functions are right and the application stack works properly.
Switches use high-speed CAM (Content-Addressable Memory) to store lookup tables. Unlike RAM, CAM is searched by content: all entries are compared in parallel, so a lookup completes in a clock cycle. TCAM (Ternary CAM), a type of memory commonly used in network devices, has a third “don’t care” state in addition to zero and one. It is therefore well suited to classifying packets, since some of the data can be masked into “any” bits. TCAM is usually fairly small, so there is reason to worry about filling it up. Memory is allocated according to the data stored in it. Network devices typically offer different profiles that adjust resources, such as table sizes, to suit different use cases.
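The ternary “don’t care” matching can be sketched in a few lines. The model below uses toy 8-bit values invented for illustration (real TCAM entries are wide hardware words holding header fields); a mask bit of 0 marks a “don’t care” position, and as in hardware, the first matching entry wins.

```python
def tcam_match(key, entries):
    """Return the action of the first (value, mask, action) entry matching
    key. Bits where mask is 0 are 'don't care' -- the ternary state that
    distinguishes TCAM from binary CAM. First match wins, as in hardware."""
    for value, mask, action in entries:
        if key & mask == value & mask:
            return action
    return "default"

# Toy 8-bit entries, ordered by priority:
entries = [
    (0b10100000, 0b11110000, "deny"),    # top 4 bits must be 1010
    (0b10000000, 0b10000000, "permit"),  # only the top bit is checked
]
```

A key of `0b10101111` hits the first entry (deny), `0b10011111` falls through to the second (permit), and anything with the top bit clear gets the default action, which is exactly how an ACL with masked fields classifies packets.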
18 years ago, Alcatel bought Timetra and brought L2VPNs heavily into MPLS. Operators were enthusiastic about Alcatel devices and the SAM service provisioning tool, and the MPLS hype turned into realism. Timetra’s TiMOS operating system was later renamed SROS. In 2015, Nokia acquired Alcatel-Lucent, and last year Nokia launched a new operating system, SR Linux. It is the latest move to Linux after Junos Evolved switched from FreeBSD to Linux in 2018. Almost all network operating systems were originally Linux-based or have by now moved to Linux, a neutral, open, and extensible platform. Many network operating systems have a container version available, but surprisingly, Cumulus does not. It can, however, still be run inside Docker.
In wifi, many players have moved to a cloud-managed environment, and access points have become independent of a centralized controller. The advantage of a controller has been the ability to tunnel traffic to a central location and to concentrate operations in one place. Controller sizing and decentralization have become a problem. Can tunneling and standalone access points be combined? It depends. Many manufacturers seem to have a gateway feature that allows LAN access point traffic to be collected and forwarded to a central location.
The Telecom Infra Project (TIP) has launched an open Openwifi initiative that aims to bring open options for building wifi from a variety of pieces. In addition to hardware and software, Openwifi includes various features such as meshing, RRM, Passpoint, and Openroaming. There are more than 100 companies in the community. Whitebox vendor Edgecore has its own wifi product line and also participates in Openwifi. Aruba was the first to launch a Wifi6E access point. Wifi6E use cases include dense areas such as airports, stadiums, hospitals, and lecture halls. In Aruba’s strategy, the access point is an essential data collection element that feeds information about users and devices to the edge platform.
AT&T has tested Openroaming in the urban environment of Austin. The purpose is to show that wifi is capable of roaming throughout the city automatically and securely. Openroaming originated at Cisco, from where it moved last year to the Wireless Broadband Alliance (WBA). Passpoint is driven by the Wifi Alliance and focuses more on local roaming; Openroaming is largely built on top of Passpoint and is designed to provide greater mobility.
The current ways of connecting IoT devices are getting a challenger from Wirepas, a spinoff of Tampere University. Wirepas intends to launch a new communication protocol next year that will allow IoT devices to communicate with each other in a distributed manner without base stations. Wirepas Private 5G uses the license-free 1.9 GHz band and familiar mobile technologies adapted to the DECT-2020 standard. The result is higher speeds than Zigbee can provide, more secure operation, and a telco-independent network.
The Zigbee Alliance, which has existed for 20 years as a name, is disappearing and is branding its operations under the new Connectivity Standards Alliance (CSA). The business is expanding, but Zigbee remains an important driving force. Project CHIP (Connected Home over IP) is now Matter, which aims to standardize the communication of smart home IoT devices. In June, Amazon will launch its Sidewalk network, which will connect all its Echo, Ring, and Tile devices via Bluetooth and 900 MHz frequency. Sidewalk is an alternative mesh network if wifi is not available. The feature is turned on automatically, but you can also turn it off.
An update has been made to the EU’s Galileo satellite navigation system, which now offers centimeter-level positioning.
Interesting psychological research says that under pressure a person thinks too complicatedly. The sprawl of thoughts is inherent: we solve problems by adding rather than removing something, although the simple solution is often the best. In a stressful situation, a person easily falls back on easy and quick ways of thinking, and fails to consider removing something as an alternative.
The Go language is gaining popularity in general and also in network programmability alongside Python. Go is the right language for building programs; Python is used to process data. In network automation, it is becoming apparent that automated configuration generation may not have been the only thing that needed solving. Creating a configuration takes a lot of time and effort to gather and verify data. A change is a multi-stage workflow, and testing is needed at its various stages. At its best, automation is a closed loop with a defined target state: the network is continuously tested, and any deficiencies and faults detected are quickly corrected automatically. The same tool may not be suitable for both automation and testing; different programs must be used for different functions.
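The closed-loop idea fits in a few lines of pseudocode-like Python. Everything here is a placeholder sketch: `read_state` stands in for a real collector (gNMI, NETCONF, CLI scraping) and `apply_fix` for a real remediation step; the VLAN names are invented for the example.

```python
def closed_loop(target_state, read_state, apply_fix):
    """One iteration of a closed loop: compare observed state to the
    declared target and remediate any drift. The callables are
    placeholders for real collectors and fixers."""
    observed = read_state()
    drift = {k: v for k, v in target_state.items() if observed.get(k) != v}
    for key, desired in drift.items():
        apply_fix(key, desired)
    return drift

# Toy in-memory "network" standing in for real devices:
state = {"vlan10": "up", "vlan20": "down"}
target = {"vlan10": "up", "vlan20": "up"}
fixed = closed_loop(target, lambda: dict(state),
                    lambda k, v: state.__setitem__(k, v))
```

In a real system this loop runs continuously, which is exactly the "constantly tested, automatically corrected" mode described above.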
Automation difficulties are tackled with various tools. Jerikan combines the data source, Jinja2 templates, Gitlab, and Ansible. Merlin compiles network status information through a CLI or API. Pandas is a well-known Python data processing library, but networking people should also consider it because of Excel data, time series, and Batfish. As a reminder, here is a summary of Python tools for networking. Help with automation learning is provided by Packet Coders.
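As a small taste of why pandas is worth a networking person's time, here is a top-talkers aggregation over flow records. The records are made up for the example; a real pipeline would read a collector's CSV or Parquet export instead of an inline DataFrame.

```python
import pandas as pd

# Toy flow records of the kind sFlow/NetFlow collectors export
# (columns and values are illustrative).
flows = pd.DataFrame({
    "src":   ["10.0.0.1", "10.0.0.1", "10.0.0.2"],
    "dst":   ["10.0.1.9", "10.0.1.9", "10.0.1.9"],
    "bytes": [1_200_000, 800_000, 500_000],
})

# Top talkers per source address -- a one-liner with groupby.
top = flows.groupby("src")["bytes"].sum().sort_values(ascending=False)
```

The same few lines extend naturally to time-series resampling or joining against Batfish output, which is the point made above.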
Ansible 4.0.0 has been released. I have to say that Ansible is not the clearest of products. Many things raise questions: structure, versioning, dependencies, compatibility, fragmented documentation, purpose, etc. The release notes for the new version can be found on GitHub. Ansible also tries to solve the workflow problem and synchronize network state with collections and resource modules.
Network monitoring should operate more on the business level and produce more sophisticated information about the network and traffic. Cost can be one upper-level parameter to monitor; it may be necessary to optimize traffic if the transported bytes cost money. Slightly better tools and a combination of different sources are needed to dig up the information. Flow data is one versatile source for traffic tracking. Even Linux switchdevs can generate sflow uniformly across platforms. Switches with Mellanox Spectrum chips can also collect information about packet drops and their causes using the What Just Happened (WJH) feature. When you feed the information into a time series database and a visualization tool, you get good near-real-time information about the behavior of packets. If you happen to use Cumulus Linux, you can use NetQ for monitoring, which is a handy tool for managing and monitoring a Cumulus network anyway.
On the commercial side, Juniper’s multi-vendor data center network management product Apstra was upgraded to version 4.0. Contrail has been discarded and Apstra is now offered by default for data center fabric management. New features include NSX-T 3.0 integration, Sonic support, and connection templates for connected devices. Apstra already supports Juniper, Cisco, Arista, and Cumulus. You can try Apstra for free by logging into Juniper vlabs.
In Europe, less attention has been paid to the NIS directive, which has been in force for a few years now. Like the GDPR, it was intended to regulate important IT services and service providers, but in practice it has stalled due to confusing practices and interpretations. Now NIS2 tries to fix the situation by specifying in more detail who the directive applies to. The list includes e.g. IXPs, CDNs, data centers, and DNS services. The goal is good: to get some discipline into poorly performing providers and services. It also means more bureaucracy and the threat of a fine if things don’t go well.
If you are interested in DNS data, here is a list of services where open zone data has been collected. Disney Streaming’s Hulu explains how to build a large DNS infrastructure and service. Facebook presents how it runs BGP in the data center. Facebook has also automated peering, using PeeringDB for authentication and request validation.
Microsoft has opened up fault data from its data center switches. The network appears reliable, but there are vendor-specific differences: one brand is twice as fault-prone as another. Microsoft doesn’t say which devices it uses, but known vendors are at least Arista and Mellanox. Arista is probably doing just fine, as Microsoft is such a big customer for it. Still, there were 46,800 failures across the 180,000 switches during the three-month period. Overall availability was 98%, but with the Sonic operating system availability rose to 99%. Failures were either short, under six minutes, or the device died completely and had to be replaced. 32% were hardware failures, 27% power outages, and 17% software bugs.
Companies and products
Greg Ferro’s Enterprise IT Career Handbook provides guidance on coping in the IT industry. It gives a good idea of how the industry is changing and how to be successfully involved. The soft side is also a very important part of competence and coping today: with technical skills you get in, but with soft skills you go far. Having gone through quite a few interviews in the IT industry myself, I can say that the level of interviews varies shockingly. Often a company tries to give a rosy picture of its own operation. Even in some large companies I could not form a clear picture of the work despite long talks and a series of questions. Recruiters and interviewers really have a lot to improve. Instead of technical details and gimmick questions, an interviewer should ask open-ended questions and have an interactive discussion. And first and foremost, the job description should be clear enough that the applicant knows what she/he is even applying for.
Digitization has received some kind of boost during the pandemic. However, according to Sofigate’s study, digitalization is more strategy-level talk than action. Companies admit they lack a clear direction for taking things forward. The main reasons for poor performance are a lack of skills and resources as well as resistance to change. Finland, meanwhile, excels at telework, ranking number one in the EU with a 25% work-from-home share.
Over the years, companies have drained themselves of know-how through their own actions. Few places have solid know-how and innovation left when everything is outsourced and bought as a service. The experts have been driven out, and the company’s role is reduced to purchasing, marketing, housekeeping, contract management, and running processes. Great, right? In North America, people have asked who lost the local telecom manufacturing: Lucent and Nortel were driven into the arms of Europeans and Chinese, leaving Americans without telco vendors of their own.
Israel’s security sector, meanwhile, is booming with no end to the success in sight. Last year, 31% of the world’s cyber investment went to Israel. There are 443 active cybersecurity companies, and 25% of them have been acquired or merged in the last six years. In March of this year alone, ten Israeli companies rose to unicorn status with valuations above a billion dollars.
Cisco has again made a series of acquisitions: Sedona to strengthen routed optical network management, Kenna Security for vulnerability management, and Socio Labs to complement Webex on online platforms. Cisco has said publicly that it intends to pass manufacturing price increases for certain products on to customers. Arista is also having unprecedented difficulty manufacturing products. Palo Alto Networks, by contrast, keeps going with the help of software: 40% of firewall sales are already software, and the share is expected to grow steadily, although hardware remains a significant source of revenue.
Juniper has jumped into the SASE game in its own way. Security Director Cloud is a management portal for both network and security services that are slowly moving toward a complete SASE world. The services can sit on your own network, in the cloud, or somewhere in between. The transition is slow, and Juniper believes that customers’ own network services and hardware will remain for a long time to come. The operational challenge of managing a multi-service environment during the transition is addressed with a centralized management tool.
Extreme released the CoPilot AIOps product to help with network monitoring and troubleshooting. Alongside CloudIQ management, CoPilot builds a baseline of the network and is then able to pick up anomalies and draw conclusions from them. A familiar story from Aruba Central and Mist Marvis, for example. In addition, Extreme released the 9920 packet broker switch, built on the P4-programmable Tofino2 chip; it can be used to monitor and process network traffic efficiently. Extreme talks a lot about wanting to get involved in 5G environments. Let’s see if it works out.
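The baseline-and-anomaly idea these AIOps products describe can be sketched very roughly. This is a generic illustration, not Extreme’s actual algorithm; the threshold and sample data are made up.

```python
# Minimal baseline-and-anomaly sketch: learn a per-metric baseline (mean and
# standard deviation), then flag samples that deviate by too many sigmas.
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Return indices of samples deviating > threshold sigmas from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [i for i, v in enumerate(samples)
            if sigma > 0 and abs(v - mu) / sigma > threshold]

# e.g. interface utilization percentages sampled over time (made-up data)
utilization = [41, 43, 40, 42, 44, 41, 95, 42, 40, 43]
print(find_anomalies(utilization))  # [6] — the spike stands out
```

Real products replace the static mean with time-of-day and seasonal baselines, but the principle is the same: model normal, then alert on deviation.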
Internet Map 2021 depicts the largest Internet services in the style of an ancient map, grouping them into countries and continents. The newly updated submarine cable map has also been released, with all 464 cables in the world and plenty of statistics and information about submarine cables.
The market price of IPv4 addresses has risen sharply over the past year, reaching a new high of $36 per address.
Qrator Labs has released statistics on Q1 2021 DDoS attacks and BGP disruptions. The large number of “disruptive ASs” attracts attention: of the approximately 100,000 registered AS numbers, almost 2% leak false routes each month and nearly 10% participate in BGP hijackings. Disruptions are usually small and local, with only a few major events per month. Internet routing disruptions and insecurity also affect services and users, though they often receive less attention than availability does. Routing attacks can reveal, for example, an anonymous user or encrypted traffic. Internet services and routing should therefore be better intertwined, rather than trying to operate as independent layers.
Hence the need for RPKI. Comcast, the largest cable company in the USA, has joined route origin validation; the work has been underway since 2014. BGP lies are a new concept to me. On the Internet, the AS path should tell the route packets take, but the data path does not always match the expected AS path. An AS may intentionally modify the AS path to attract traffic to itself and then forward the traffic some other way, or traffic may inadvertently exit from the wrong place.
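The route origin validation Comcast deployed follows well-known logic (RFC 6811): a route is valid if some ROA covers the prefix with a matching origin AS and an acceptable prefix length, invalid if it is covered but mismatched, and not-found otherwise. A minimal sketch of that logic, with made-up ROA entries rather than any real validator’s API:

```python
# Illustrative RPKI route origin validation (RFC 6811 semantics).
from ipaddress import ip_network

# Validated ROA payloads: (prefix, maxLength, authorized origin AS) — made up
roas = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def rov_state(prefix_str: str, origin_as: int) -> str:
    """Return 'valid', 'invalid', or 'not-found' for an announced route."""
    prefix = ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        if prefix.subnet_of(roa_prefix):          # a ROA covers this prefix
            covered = True
            if prefix.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"

print(rov_state("192.0.2.0/24", 64500))    # valid
print(rov_state("192.0.2.0/24", 64999))    # invalid (wrong origin AS)
print(rov_state("203.0.113.0/24", 64500))  # not-found (no covering ROA)
```

Note that origin validation only checks the origin AS, not the rest of the path, which is exactly why the “BGP lies” above remain possible even with RPKI deployed.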
In the battle for broadband connections, BT’s Openreach has the wind in its sails: British fiber installations are progressing faster than expected and at a lower cost. The target has been raised to 25 million homes by 2026. In Europe, fiber use varies greatly from country to country. Iceland, Spain, and Sweden are in the lead, and Finland is around the average.
The satellite race is a little surprising: for such a limited market with such a technically demanding and expensive implementation, there are plenty of competitors from Western countries and China, and confidence is high. There have also been more negative reviews of Starlink. The satellite connection has its limitations: there are interruptions, and real-time interactive applications do not work. The antenna requires a line of sight to the satellite, and space is starting to get crowded and dangerous. A satellite connection is also unequal, because it restricts the use of certain applications. The net neutrality debate took place years ago in the context of mobile networks; back then, the reasons were administrative. Now technical characteristics are emerging and differentiating regions geographically. Of course, satellite is better than nothing.
A study also revealed the ultimate reason why some Americans do not use the Internet. The reason is not the availability of a connection, the price, the device, or security threats. People are simply not interested in using the internet.
RIPE 82 offers a lot of presentations from the internet world. NFD25 introduced products from Nokia, Aruba, Juniper, VMware, Intel, IP Infusion, PathSolutions, and Arrcus. OARC 35 covered DNS issues, of which Geoff Huston wrote a summary and gave his own assessment.
The main news of the RSA Conference has been compiled by SDxCentral:
- Can Silicon Security Stop Cyberattacks? Intel Says Yes
- Cisco CEO: Cybercrime Damages Hit $6 Trillion
- McAfee Unites XDR, SASE at RSA Conference
- White House Cyber Chief at RSA: ‘Cost of Insecure Tech Is Staggering’
- Fortinet EDR Gains MDR, Mitre Threat Tags
- Palo Alto Networks Zeroes In On Zero-Trust Security
- AT&T Cybersecurity Releases TDR for Government
- Netflix, VMware Security Leads Talk 3 Lessons Learned
- SolarWinds CEO Says Attackers Gained Entry in January 2019
- CrowdStrike Co-Founder: Ransomware Even Bigger Threat Than Nation-States
It has been 10 years since the hacking of RSA’s SecurID tokens, and the NDAs have expired. Now the participants are telling what happened in 2011. What is interesting is not so much the technical implementation of the hack, but how it was handled and how it affected people. Big themes were worked through: the tension between secrecy and openness, paranoia and loss of trust, the end of innocence in the security industry, and the formation of the current operational landscape.
The tail end of the transatlantic submarine cable Amitié was left on the seabed over the winter to await installation. Now the cable end off the French coast has disappeared from its supposed location. Divers have been sent into the ocean to search for the missing tail end.