Items tagged with: Safe
“Unavoidably unsafe”
…the vaccine “in the present state of human skill and knowledge cannot be made safe,” ~ US Supreme Court, BRUESEWITZ ET AL. v. WYETH LLC, FKA WYETH, INC., ET AL.
#vaccines #unsafe #Buttigieg #MayorPete #fail #democrats #USSC
HN Discussion: https://news.ycombinator.com/item?id=19686701
Posted by tysone (karma: 10168)
Post stats: Points: 137 - Comments: 65 - 2019-04-17T21:37:48Z
#HackerNews #are #crowd #longer #safe #the #you
HackerNewsBot debug: Calculated post rank: 113 - Loop: 194 - Rank min: 100 - Author rank: 194
COI uses an #email address and any IMAP server as its infrastructure. This means it can already connect 3.8 billion users - anyone with an email address.
Article word count: 463
HN Discussion: https://news.ycombinator.com/item?id=18788069
Posted by DyslexicAtheist (karma: 9070)
Post stats: Points: 133 - Comments: 40 - 2018-12-30T08:17:31Z
#HackerNews #and #drivers #high-level #languages #safe #secure #video
Paul Emmerich, Simon Ellmann and Sebastian Voit
Playlists: ʼ35c3ʼ videos starting here / audio / related events
Drivers are usually written in C for historical reasons. This can be bad if you want your driver to be safe and secure. We show that it is possible to write low-level drivers for PCIe devices in modern high-level languages.
We are working on super-fast user space network drivers for the Intel 82599ES (ixgbe) 10 Gbit/s NICs in different high-level languages. Weʼve got fully working implementations in Rust, C#, Go, OCaml, Haskell, and Swift. All of them are written from scratch and require no kernel code.
Check out our GitHub page with links to all implementations, performance measurements, and publications for further reading.
Supposedly modern user space drivers (e.g., DPDK or SPDK) are still being written in C in 2018 :(
This comes with all the well-known drawbacks of writing things in C that might be prevented by using safer programming languages.
Also, did you ever see a kernel panic because a driver did something stupid? It doesnʼt have to be that way, drivers should not be able to take down the whole system.
There are three steps to building better drivers:
- Write them in a safer programming language, eliminating whole classes of bugs and security problems such as bad memory accesses
- Isolate them from the rest of the operating system: user space drivers that drop privileges
- Isolate the hardware using the IOMMU
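The user space step above comes down to ordinary file operations: on Linux, a PCIe device's BAR is exposed through sysfs and can be mapped with plain open() and mmap(), with no kernel code. A minimal sketch of that idea, assuming a hypothetical device address; the helper works on any file:

```c
/* Sketch of user space device access: map a PCIe BAR exposed at e.g.
 * /sys/bus/pci/devices/0000:03:00.0/resource0 (hypothetical address)
 * into the process. No kernel driver code is involved. */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void *map_device_file(const char *path, size_t *len) {
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;
    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return NULL;
    }
    /* MAP_SHARED so stores reach the underlying device registers (or file). */
    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd); /* the mapping stays valid after close */
    if (p == MAP_FAILED)
        return NULL;
    *len = (size_t)st.st_size;
    return p;
}
```

In a real driver the returned pointer would be treated as volatile device registers, and the IOMMU isolation step would come from attaching the device through a framework such as vfio rather than raw sysfs access.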
We show that it is possible to achieve all of these goals for PCIe drivers on Linux by implementing user space network drivers in all of the aforementioned programming languages. Our techniques are transferable to other drivers that would benefit from more modern implementations.
Our drivers in Rust, C#, Go, and Swift are completely finished, tuned for performance, evaluated, and benchmarked. All of them except for Swift are about 80-90% as fast as our user space C driver and 6-10 times faster than the kernel C driver. We also investigate how garbage-collected languages affect the latency of a packet forwarder built on top of our drivers.
We take a brief look at Haskell, Swift, OCaml, and C# in the talk and a deeper dive into Rust and Go.
The talk also features a quick summary from last yearʼs talk about user space driver basics, so no previous knowledge is required.
Another thing to take away from this talk is: writing drivers is neither scary nor hard. You can write one in your favorite programming language, so go ahead and try that :)
HackerNewsBot debug: Calculated post rank: 102 - Loop: 288 - Rank min: 100 - Author rank: 39
Security flaws threaten our privacy and bank accounts. So why aren’t we fixing them?
Article word count: 1035
HN Discussion: https://news.ycombinator.com/item?id=18766769
Posted by sinak (karma: 26882)
Post stats: Points: 122 - Comments: 66 - 2018-12-27T02:23:14Z
#HackerNews #arent #cellphones #our #safe
By Cooper Quintin
Mr. Quintin is a senior staff technologist at the Electronic Frontier Foundation.
America’s cellular network is as vital to society as the highway system and power grids. Vulnerabilities in the mobile phone infrastructure threaten not only personal privacy and security, but also the country’s. According to intelligence reports, spies are eavesdropping on President Trump’s cellphone conversations and using fake cellular towers in Washington to intercept phone calls. Cellular communication infrastructure, the system at the heart of modern communication, commerce and governance, is woefully insecure. And we are doing nothing to fix it.
This should be at the top of our cybersecurity agenda, yet policymakers and industry leaders have been nearly silent on the issue. While government officials are looking the other way, an increasing number of companies are selling products that allow buyers to take advantage of these vulnerabilities.
Spying tools, which are becoming increasingly affordable, include cell-site simulators (commonly known by the brand name Stingray), which trick cellphones into connecting with them without the cellphone owners’ knowledge. Sophisticated programs can exploit vulnerabilities in the backbone of the global telephone system (known as Signaling System 7, or SS7) to track mobile users, intercept calls and text messages, and disrupt mobile communications.
These attacks have real financial consequences. In 2017, for example, criminals took advantage of SS7 weaknesses to carry out financial fraud by redirecting and intercepting text messages containing one-time passwords for bank customers in Germany. The criminals then used the passwords to steal money from the victims’ accounts.
How did we get here, and why is our cellular infrastructure so insecure?
The international mobile communications system is built on top of several layers of technology, parts of which are more than 40 years old. Some of these old technologies are insecure, others have never had a proper audit and many simply haven’t received the attention needed to secure them properly. The protocols that form the underpinnings of the mobile system weren’t built with security in mind.
SS7, invented in 1975, is still the protocol that allows telephone networks all over the world to talk to one another. It was built on the assumption that anyone who can connect to the network is a trusted network operator. When it was created, there were only 10 companies using SS7. Today, there are hundreds of companies all over the world connected to SS7, making it far more likely that credentials to the system will be leaked or sold. Anyone who can connect to the SS7 network can use it to track your location or eavesdrop on your phone calls. A more recent alternative to SS7 called Diameter suffers from many of the same problems.
Another protocol, GSM, invented in 1991, allows your cellphone to communicate with a cell tower to make and receive calls and transmit data. The older generation of GSM, known as 2G, doesn’t verify that the tower that your phone connects to is authentic, making it easy for anyone to use a cell-site simulator and impersonate a cell tower to obtain your location or eavesdrop on your communications.
Larger carriers have already begun dismantling their 2G systems, which is a good start, since later generations of GSM such as 3G, 4G and 5G solve many of its problems. Yet our phones all still support 2G and most have no way to disable it, making them susceptible to attacks. What’s more, research has shown that 3G, 4G, and even 5G have vulnerabilities that may allow new generations of cell-site simulators to continue working.
Nobody could have envisioned how deeply ingrained cellular technology would become in our society, or how easy and lucrative exploiting it would be. Companies from China, Russia, Israel and elsewhere are making cell-site simulators and providing access to the SS7 network at prices affordable even to the smallest criminal organizations. It is increasingly easy to build a cell-site simulator at home, for no more than the cost of a fast-food meal. Spies all over the world — as well as drug cartels — have realized the power of these technologies.
So far, industry and policymakers have largely dragged their feet when it comes to blocking cell-site simulators and SS7 attacks. Senator Ron Wyden, one of the few lawmakers vocal about this issue, sent a letter in August encouraging the Department of Justice to “be forthright with federal courts about the disruptive nature of cell-site simulators.” No response has ever been published.
The lack of action could be because it is a big task — there are hundreds of companies and international bodies involved in the cellular network. The other reason could be that intelligence and law enforcement agencies have a vested interest in exploiting these same vulnerabilities. But law enforcement has other effective tools that are unavailable to criminals and spies. For example, the police can work directly with phone companies, serving warrants and Title III wiretap orders. In the end, eliminating these vulnerabilities is just as valuable for law enforcement as it is for everyone else.
As it stands, there is no government agency that has the power, funding and mission to fix the problems. Large companies such as AT&T, Verizon, Google and Apple have not been public about their efforts, if any exist.
This needs to change. To start, companies need to stop supporting insecure technologies such as 2G, and government needs a mandate to buy devices solely from companies that have disabled 2G. Similarly, companies need to work with cybersecurity experts on a security standard for SS7. Government should buy services only from companies that can demonstrate that their networks meet this standard.
Finally, this problem can’t be solved by domestic regulation alone. The cellular communications system is international, and it will take an international effort to secure it.
We wouldn’t tolerate gaping potholes in our highways or sparking power lines. Securing our mobile infrastructure is just as imperative. Policymakers and industries around the world must work together to achieve this common goal.
Cooper Quintin is a senior staff technologist with the Electronic Frontier Foundation, where he investigates digital privacy and security threats to human-rights defenders, journalists and vulnerable populations.
HackerNewsBot debug: Calculated post rank: 103 - Loop: 129 - Rank min: 100 - Author rank: 206
Article word count: 9
HN Discussion: https://news.ycombinator.com/item?id=18527535
Posted by paraboul (karma: 643)
Post stats: Points: 123 - Comments: 61 - 2018-11-25T16:43:13Z
#HackerNews #doesnt #languages #make #safe #unsafe #webassembly
WebAssembly is all the rage these days. And for good reasons. It’s pretty exciting to see many compiled languages adopting a common intermediate representation that can eventually be translated to native code.
WebAssembly was designed with security in mind, and it is a perfect fit for running untrusted code in a web browser.
However, the security guarantees of WebAssembly also make it attractive to run desktop applications, kernel modules, and server-side code.
The WebAssembly memory model
The memory model plays a major role in these security guarantees. Memory is represented as a single linear block, and reading/writing from/to it can only happen through two opcodes, that will systematically check for out-of-bounds access (minus optimizations that can elide checks without altering the guarantees).
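Conceptually, every load the guest performs compiles down to something like the following bounds check. This is a simplified model, not any engine's actual implementation: real hosts typically fold the check into guard pages or elide it when it is provably safe.

```c
/* Simplified model of a WebAssembly host's bounds-checked i32 load.
 * Real engines optimize the check away where possible, but the
 * guarantee is the same: no access outside the linear memory. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    uint8_t *base; /* host pointer to the start of the linear memory */
    size_t size;   /* current size of the linear memory, in bytes */
} linear_memory;

uint32_t wasm_i32_load(const linear_memory *m, uint32_t addr) {
    /* addr is an offset from base, never a raw host pointer */
    if ((uint64_t)addr + sizeof(uint32_t) > m->size)
        abort(); /* trap: out-of-bounds access */
    uint32_t v;
    memcpy(&v, m->base + addr, sizeof v);
    return v;
}
```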
From the host perspective, the advantages are obvious. Untrusted code can’t touch anything outside the memory region dedicated to it.
Even inherently unsafe languages such as C, when compiled to WebAssembly, immediately become “safer”: no matter what pointer arithmetic is being done, the code should not be able to escape the memory sandbox.
Ignoring side-channel attacks for a moment, hosts can thus run multiple untrusted WebAssembly applications simultaneously, without having to worry too much about them interfering with each other.
Applications only need to know about two things: how big the linear memory currently is, and how to ask the host to grow it if required.
Address 0, as seen by a WebAssembly guest application, simply represents the first byte of the linear memory. These addresses are effectively offsets that get added to a base address, kept in a dedicated register. And by design, applications don’t have any ways to change the value of that register.
From the host point of view, this is a huge win. From a guest point of view, things are slightly different.
WebAssembly and heap allocations
Guests are constrained to offsets within the big linear memory region. This is the only way they can access memory.
But applications don’t expect a single big linear memory region. Applications typically perform many dynamic allocations in a wide spectrum of sizes. Individual objects lead to individual allocations.
So, they need a memory allocator, i.e. something that will manage individual allocations within a large, linear segment.
WebAssembly hosts currently don’t provide anything like that. Guests have to come with their own memory allocator.
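“Bring your own memory allocator” often means something as simple as the following bump allocator, which is roughly what minimal wasm toolchains ship (a deliberately naive sketch: it never frees, and it detects none of the misuse discussed below).

```c
/* Naive bump allocator over a linear memory region, roughly the kind
 * of minimal malloc() a wasm guest might ship. No free(), no metadata,
 * no mitigations of any kind. */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t *heap; /* start of the region reserved for the heap */
    size_t size;   /* size of that region */
    size_t next;   /* offset of the next free byte */
} bump_alloc;

void *bump_malloc(bump_alloc *a, size_t n) {
    size_t aligned = (a->next + 7) & ~(size_t)7; /* 8-byte alignment */
    if (aligned + n > a->size)
        return NULL; /* would need to ask the host to grow the memory */
    a->next = aligned + n;
    return a->heap + aligned;
}
```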
And this is suboptimal, to say the least. Besides having multiple implementations of the same thing, allocators being part of the client code has some serious drawbacks.
First, hosts have no visibility on how memory is being managed within a guest. Want to diagnose memory leaks? Good luck with that.
Allocators optimized for a specific platform are also very likely to be faster than a WebAssembly version. Given that memory allocations are extremely frequent in many common applications, this can be significant.
But more importantly, modern memory allocators implement significant mitigations against common bugs that can escalate into vulnerabilities.
Part of what makes a language such as C unsafe is the fact that a memory allocation is represented as a single pointer to the first byte. From there, applications are assumed to stay within the allocated range, but there is no barrier preventing them from straying outside it:
char *x = malloc(10);
x[10] = 42; // fine!
Since x[10] is not within the allocated area, what do we have at this address? It can be data from other allocations, or internal structures belonging to the memory allocator.
As demonstrated by the Heartbleed vulnerability, not staying within the bounds can have dramatic implications. If out-of-bounds data can be replaced with untrusted input, the control flow can change, eventually leading to arbitrary code execution. If out-of-bounds data can be read, this may lead to sensitive information disclosure.
Modern memory allocators have developed excellent mitigations against this class of bugs. As an example, OpenBSD’s omalloc randomizes allocations, keeps data close to guard pages, and has effective detection of double-free and use-after-free.
Applications have also started to adopt stricter allocation strategies for sensitive data. Libsodium’s guarded heap allocations, and their reimplementation in other languages, have become a natural way to protect secrets from Heartbleed-like vulnerabilities.
These exploit mitigation techniques have also proven to be very effective at finding bugs in popular applications: bugs that could have been turned into actual exploits had they not been detected early.
A major issue with WebAssembly is that most of these mitigation techniques suddenly become impossible to implement. And the few that could be implemented are skipped for the sake of keeping execution speed acceptable. Applications designed for WebAssembly tend to ship with very primitive malloc() implementations that are simple, small and fast, but assume bug-free code.
Another surprising fact about WebAssembly is that, addresses being just offsets, address 0 is a completely valid location, which can be read or written to.
This can be a little bit disturbing, since virtually all operating systems from the past 20 years ensured that accessing NULL would immediately stop the execution flow.
A NULL pointer dereference is a common symptom of a logic error, usually due to uninitialized pointers or flawed pointer arithmetic. There is never a valid reason to intentionally dereference NULL, and having applications crash when this happens is invaluable for developers to find and fix the relevant logic flaws.
Not to mention that a solid number of vulnerabilities classified as simple denial-of-service would have been way more critical if the application didn’t crash on accessing NULL.
The program is not in an expected state, and its execution has become unpredictable. Immediately stopping its execution is by all means the best thing to do.
A NULL pointer dereference suddenly becoming a silent operation in WebAssembly is concerning.
Host vs guest safety
There is some paradox here. We have a fantastic and highly secure execution environment from a host perspective. But from the guest perspective, that very same environment looks like MS-DOS, where memory is a giant playground with no rules.
Developers can no longer leverage the tools they are used to in order to ensure that heap allocations are used safely. Those tools simply don't work in WebAssembly, and the WebAssembly host can't help either.
As a result, applications specifically written for WebAssembly in unsafe languages are likely to be less reliable than if they had been designed for a native environment.
But more importantly, vulnerabilities that would have been mitigated in a native environment are not mitigated any more. To some extent, this is a significant regression.
Is it an issue for web browsers? Not so much, as they already trust foreign code, and the only secrets they might leak are their own.
Is it an issue for desktop applications and kernel modules? Definitely, if they process untrusted data, which is one of the main justifications for going through a WebAssembly transform.
The idea of running WebAssembly server-side relies on the fact that if WebAssembly is safe to run in a web browser, it should be safe to run on servers as well. However, for the reasons listed above, this is not necessarily the case. While escaping the sandbox itself may be difficult to achieve, application security doesn’t benefit from the mitigations commonly found in traditional environments.
So, what can we do?
With the goal of providing better support for languages other than C, C++ and Rust, allocating and manipulating garbage-collected objects is a feature being actively considered for inclusion in the WebAssembly design.
However, there would be clear benefits in also delegating basic dynamic memory management to the host.
Technically, this can be implemented already. However, for successful adoption, a standard interface has to be defined. Granted, I haven’t closely followed the recent WebAssembly proposals, but I don’t think this has been considered so far.
In the meantime, keep in mind that while WebAssembly is a huge step forward, it is not a silver bullet, and running it in a non-browser environment requires extra security considerations.
HackerNewsBot debug: Calculated post rank: 102 - Loop: 275 - Rank min: 100 - Author rank: 61