write.with.parts

Reader

Read the latest posts from write.with.parts.

from iNfOsEc aNnOuNcEmEnTs

August 5, 2019 – Emma Sullivan

The Pentagon used its commercial contracting agency to order millions of dollars of computers that are vulnerable to cybersecurity flaws, according to a report from the Defense Department's internal watchdog.

The Department of Defense announced Thursday that it awarded a contract for cybersecurity solutions to Silicon Valley-based NGP VAN.

The two-year contract cost $6.5 million, and it includes a one-year renewal option for $7.5 million. The Pentagon did not immediately respond to a request for comment on whether it was aware of any of NGP VAN's flaws before the contract.

The Risk of Going to The Cloud

NGP VAN warned in May 2017 that its security could be compromised if someone gained unauthorized access to the software's source code.

NGP VAN is a developer of cloud computing services and can help businesses “unlock critical business insights using the full set of business resources, enabling them to operate more successfully,” according to its website.

In a contract document, the Defense Department said the vendor's cybersecurity services, such as “behind-the-scenes monitoring, automated testing, secure configuration and incident management tools, secure support services, secure automations, and incident response, will monitor at-risk systems and services for behavior and their associated activities.”

The company sells to the Defense Department's Office of the Assistant Secretary of Defense for Intelligence, so it's not clear how or why the NGP VAN system was chosen over others.

A Shady History

It's not uncommon for the Pentagon to purchase systems with vulnerabilities, and it's not uncommon for manufacturers to keep faulty or outdated technology on hand to avoid paying tens of millions to buy a whole new system, the Government Accountability Office previously reported.

There is no reporting requirement for how long the Pentagon knew about any of NGP VAN's flaws and “a company acquiring a system with a known vulnerability should notify and enter into satisfactory mitigation agreements with the supplier,” wrote US Sen. Dick Durbin, a member of the Senate Committee on Veterans' Affairs, in a June letter to Pentagon Deputy Secretary Patrick Shanahan.

“We found that by and large, the Defense Department intends to do everything it can to be as proactive as possible with respect to cyber risks posed by its civilian employees, but it's not clear how far it can go considering the sheer volume of documents it maintains and the cost required to protect this information,” Durbin said.

One of the newly uncovered flaws was a vulnerability that made it possible for anyone to pick up NGP VAN's software from any location and access it remotely.

It was not clear how the Pentagon knew that threat existed, but it appears that agency officials carried out follow-up checks after the vulnerability was listed on NGP VAN's own security advisory.


This article was automatically generated by Grover, an AI built to generate and detect fake news online, using just the title from an actual news story.

Source article via fedscoop.com: https://www.fedscoop.com/defense-department-known-cyber-vulnerabilities-lenovo-lexmark-gopro/

 
Read more...

from iNfOsEc aNnOuNcEmEnTs

July 12, 2019 – Jane Ryan

The Weather Channel was attacked by an external cyberattack just one day before major weather events were set to hit the US.

The Weather Channel has said it is working with “third-party forensic authorities” on an investigation into the hack, which took place between 6pm and 10pm Eastern Time on Wednesday.

The Weather Channel said on Thursday it had been “targeted with an external cyberattack by an individual or group of people”.

While the breach does not seem to have led to any unexpected disasters, with the Weather Channel not issuing any warnings before severe weather in the US on Wednesday, it did allow an enterprising hacker to change Weather Channel blogs.

Rather than just blogging about the impending weather, the person altered the content to mention the hack on the Weather Channel homepage, posting, “F*** it, I’m going to change the weather on the Weather Channel.”

“We are aware of an alleged incident relating to our website yesterday,” a Weather Channel spokesperson said in a statement on Thursday. “We will provide more information as it becomes available.”

The Weather Channel is owned by Turner Broadcasting.

The nature of the cyberattack and access obtained so far is unclear.

Trolling the weather will eventually get you into lots of trouble.


This article was automatically generated by Grover, an AI built to generate and detect fake news online, using just the title from an actual news story.

Source article lost.

 
Read more...

from iNfOsEc aNnOuNcEmEnTs

June 28, 2019 – Giles Broom

Update

The vulnerability has been patched on all systems containing hardware features such as the ERC20 or the GTML to AIP sockets. Any system containing these technologies should be updated immediately, or work using instructions in the ETH developer’s Bulletproof List.

If you need more information about the scope of the vulnerability, please see the support instructions for the Bulletproof List.

Original Article

A critical vulnerability has been identified in Ethereum’s network architecture, allowing users to conduct transactions anonymously, circumvent traditional security features and potentially empty thousands of wallet accounts without them being known or able to defend themselves.

The bug, called “Dark Wallet” by ETH developer, Ansible, represents a serious escalation of risk for system stability and security in a cryptocurrency based on top-level security and implementation of ESET’s CERT or human-readable encryption on a communications interface. While not a board level security vulnerability, it may be possible to manipulate it from a malicious actor’s point of view or a computer system where it was previously undetected, via a strong performance gain that increases the chances of the targeted cryptocurrency token.

“Dark Wallet” is an intermediate validation layer between ERC20 and GTML to AIP sockets. The ERC20 provides the basic function of transactions in a Bitcoin implementation while the GTML to AIP sockets creates a user interface to the transaction confirmation process.

The dark wallet uses a method called tethered authentication, whereby the transactions are sent to a connection or database and run on it when needed to confirm the transaction, but only at the time of shipping the document. This prevents transaction confirmation from being interrupted. The TOR (transmission mode of message clients) validation layer and other features commonly available on blockchain networks protect transactions in this case.

However, Dark Wallet appears to not run properly on ESET’s distributed hash checker algorithm – a claim ESET and its affiliates cannot independently verify. Furthermore, the ERC20 token can be split into up to three independent versions using an exploit in ERC20-G, with the processing duties assigned to the ERC20-G transactions, effectively silencing the ERC20-G.

This means attackers can alter the ERC20 to AIP sockets, silencing them to disable a valuable layer of security. While no ERC20 developer has found a way to execute Tor moves in stealth with no effect on ERC20 wallets and without the presence of any tamper detection on a computer system, the TOR-based Dark Wallet circumvents these defenses, allowing an attacker to execute any ERC20 transactions in a very basic way. The exploit has the potential to make millions of ERC20 transaction requests in a single transaction, allowing these fake transactions to be executed anonymously in the same way as real transactions, opening a route to millions of ERC20 funds being emptied without people knowing about it.

Dark Wallet also appears to be a fundamental breach of the protocol’s security as it is unable to identify what ERC20 or GTML-equipped devices should be running the method of validation. If ESET is unable to verify this, we cannot verify if Dark Wallet is working in a secure fashion.

As well as being a major risk to Ethereum’s continued viability as a viable cryptocurrency, the dark wallet also has a direct impact on the authentication chain of the ERC20. We will try to get ESET to correct this flaw quickly, but Dark Wallet appears to be contained in code not visible to unmodified systems.

Additional information on the Dark Wallet implementation is available here.


This article was automatically generated by Grover, an AI built to generate and detect fake news online, using just the title from an actual news story.

Source of the title is my own imagination.

 
Read more...

from iNfOsEc aNnOuNcEmEnTs

August 14, 2019 – Allan Alderman

Network certification, the RSA industry standard to authorize its users, requires applications to be able to define, enact, and execute with certain minimum security parameters. The protocol that allows GStreamer developers to define such parameters, MVC 1.1, (participation identification model) was intended to adhere to the necessary security standards, but some developers have been able to circumvent the requirement.

For example, with MVC 1.1, GStreamer developers can specify that the protocol includes 256-bit PII, PL0, and string object numeric values. Different members of the protocol can also dictate what that IP string represents.

Even the most sophisticated developers were able to bypass the security conventions included in the MVC 1.1 protocol.

From one application to another

Based on one piece of code in MVC 1.1, MVC 0.1, and an unsuccessful authentication attempt, the attacker can impersonate a GStreamer developer and submit a vote, thus giving the attacker the ability to contribute.

In an attempt to thwart the attack, one can either create an MVC 0.1 application (the tool itself is not malicious), add a copyright notification, or use a weak validation number that changes based on the intent of the transaction. For some reason, MVC 0.1 does not mention any of these precautions.

MVC 0.1

A knowledge of MVC 0.1 will not necessarily help to prevent MVC 0.1 from being executed. Fortunately, at this point, it's not clear that anyone can do this in the wild.


This article was automatically generated by Grover, an AI built to generate and detect fake news online, using just the title from an actual news story.

Source of the title is my own imagination.

 
Read more...

from adasauce

I was in the midst of implementing a little POC migration of a small deployment from OpenVPN to WireGuard for a pre-production network when I ran into an issue with 3 of the network clients being NanoPC-T4 devices. I didn't initially consider that this would be an issue, but FriendlyElec in their infinite wisdom does not provide a linux-headers package for their kernels, and WireGuard doesn't provide a wireguard-modules package for ARM. Other distributions such as DietPi still package the upstream kernel directly as well, so they also do not provide a headers package.

After banging my head against the wall looking into custom mainline kernels for the NanoPC-T4, I considered that there might be a userspace utility for WireGuard that doesn't require compiling a kernel module or installing wireguard-dkms.

A quick search turned up both wireguard-go and boringtun*. From a cursory glance and a few user stories, the boringtun project seemed to be a slightly more mature and less error-prone implementation that has already seen deployments on embedded & SBC devices.

*Editor's Note: Fuck CloudFlare. Get your FUCK CLOUDFLARE stickers today!

Installation & Configuration

I still need the wireguard-tools package in order to leverage the handy-dandy wg-quick util and related systemd integration so the VPN will connect easily on boot.

Generate client side keys for WireGuard, and set up a simple wg0.conf using the existing OpenVPN IP addresses so I don't have to reconfigure security groups.

$ cat /etc/apt/sources.list.d/unstable.list
> deb http://deb.debian.org/debian/ unstable main

$ cat /etc/apt/preferences.d/limit-unstable
> Package: *
> Pin: release a=unstable
> Pin-Priority: 90
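
With the unstable source and pin in place, refresh the package lists so apt can see the new repo:

$ apt update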

$ apt install wireguard-tools

$ wg genkey | tee privatekey | wg pubkey > publickey
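
The config below also references a preshared key; if the server side hasn't already handed you one, wg can generate that too (assuming, as here, that the same PSK then gets added to the server's peer entry):

$ wg genpsk > psk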

$ cat /etc/wireguard/wg0.conf

> [Interface]
> Address = 10.8.0.2/24
> PrivateKey = <privatekey>
> DNS = <your favourite DNS server>
>
> [Peer]
> PublicKey = <server public key>
> PresharedKey = <server PSK>
> AllowedIPs = 10.8.0.0/24
> Endpoint = <server IP>:51820
> PersistentKeepalive = 25
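
For reference, the matching config on the server end would look something like this. A minimal sketch, assuming the server takes 10.8.0.1 on the tunnel and that the key placeholders line up with the client config above:

> [Interface]
> Address = 10.8.0.1/24
> ListenPort = 51820
> PrivateKey = <server private key>
>
> [Peer]
> PublicKey = <client public key>
> PresharedKey = <server PSK>
> AllowedIPs = 10.8.0.2/32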

Install BoringTun & Rust

The DietPi package repo's Rust was old and decrepit, so let's just install a fresh Rust via rustup.rs. It should automagically install everything you need to build and compile your rusty applications (i.e. cargo & rustc) for whatever platform you're running. In this case: arm64/aarch64.

$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Next, I had to pull down my own copy of BoringTun from source and fix a small compilation issue before building and installing.

$ git clone https://github.com/cloudflare/boringtun.git
$ cd boringtun/
$ vim src/device/tun_linux.rs
# >> https://github.com/cloudflare/boringtun/issues/89/#issuecomment-508962631
$ cargo install --path .

The fancy new boringtun binary installs to /root/.cargo/bin, which should be on your path if you've sourced $HOME/.cargo/env since installing Rust.
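
If wg-quick (which runs as root) can't find the binary, one easy option is to symlink it onto the default PATH; the destination here is my own choice, nothing the project mandates:

$ source $HOME/.cargo/env
$ ln -s /root/.cargo/bin/boringtun /usr/local/bin/boringtun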

Manual hax

Privilege de-escalation doesn't work properly for boringtun when logged in as root. Passing the undocumented environment variable to wg-quick on startup, as suggested by the project's README with WG_QUICK_USERSPACE_IMPLEMENTATION="boringtun --disable-drop-privileges" wg-quick up /etc/wireguard/wg0.conf, was unsuccessful. So hacking wg-quick in place was needed to end up with a functional utility.

$ vim $(which wg-quick)

> # swap the body of add_if() so it launches boringtun directly,
> # leaving the original implementation commented out below
> add_if() {
>         boringtun --disable-drop-privileges "$INTERFACE"
>         # local ret
>         # ...
> }

Note: WG_QUICK_USERSPACE_IMPLEMENTATION is purposefully undocumented and added in #b9b78f27399 to help enable userspace implementations of the WireGuard protocol.

It's Tunnel Time

Now that the hacking and slashing is behind us, we can finally bring the WireGuard VPN connection online to the server.

$ wg-quick up /etc/wireguard/wg0.conf
$ wg

interface: wg0
  public key: <redacted>
  private key: (hidden)
  listening port: 57870

peer: <server public key>
  preshared key: (hidden)
  endpoint: <server IP>:51820
  allowed ips: 10.8.0.0/24
  latest handshake: 1 minute, 16 seconds ago
  transfer: 3.79 KiB received, 119.36 KiB sent
  persistent keepalive: every 25 seconds

If you need to tunnel everything over your VPN vs. just the resources on the VPN network, you can change AllowedIPs in your configuration to 0.0.0.0/0 and wg-quick will generate a default route and mark all packets as destined for the WireGuard connection.
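
In wg0.conf that's a one-line change to the [Peer] section (add ::/0 as well if you also want IPv6 traffic captured):

> [Peer]
> AllowedIPs = 0.0.0.0/0, ::/0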

Caveats

It seems like when tearing down the connection, wg-quick doesn't actually destroy the BoringTun process, and the next bring-up creates a second one without complaint. All seems to be working fine, but after 3-4 restarts the NanoPC's CPU is pinned at 500% usage with all the userspace agents fighting each other.

Probably something I'll hack on later to clean up, but for now it is working as well as can be expected. Even using a userspace agent instead of a kernelspace one, system load is down and overall latency has improved compared to the existing OpenVPN setup.
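
Until I get around to that, a blunt workaround (my own hack, nothing official) is to sweep up any stray boringtun processes between restarts:

$ wg-quick down /etc/wireguard/wg0.conf
$ pkill boringtun
$ wg-quick up /etc/wireguard/wg0.conf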

Notes

For those looking for a great source of information on installing and configuring WireGuard, check out the Arch Linux Wiki page on WireGuard. Top notch as always.

 
Read more...

from adasauce

I manage a number of Ubuntu servers, almost all of which have over time developed DNS resolution issues that traced back to something wrong with systemd-resolved, which has had a pretty horrible track record for actually working most of the time.

The latest wonky behaviour manifested as DNS resolution failing completely at random, then working again, then dropping out immediately after, then resolving only IPv6 addresses, then nothing resolving at all. After swapping out my nameservers and checking the stability of my connections to said nameservers, it all came back again to systemd-resolved.

Samples of the log output:

systemd-resolved[592]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
systemd-resolved[592]: Using degraded feature set (TCP) for DNS server x.x.x.x
systemd-resolved[592]: Using degraded feature set (UDP) for DNS server x.x.x.x

Then it would die. The only course for recovering connectivity was to kick the service with systemctl restart systemd-resolved; it would then work for only another few minutes before getting into a bad state again.

I found issue #1822416 on launchpad.net which seemed to describe what I was facing, but it remains open even though there is an upstream systemd fix in place for it on github/systemd.

The final solution?

Kill it with fire and switch to unbound. Unbound has been cropping up more and more in deployments, it's been relatively painless to run, and it required no additional configuration out of the box.

$ sudo apt install unbound resolvconf
$ sudo systemctl disable systemd-resolved
$ sudo systemctl stop systemd-resolved
$ sudo systemctl enable unbound-resolvconf
$ sudo systemctl enable unbound
$ sudo systemctl start unbound-resolvconf
$ sudo systemctl start unbound
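
To sanity-check that resolution is actually going through unbound rather than the old stub listener, something like this works (assuming dig from dnsutils is installed):

$ cat /etc/resolv.conf
> nameserver 127.0.0.1

$ dig example.com @127.0.0.1 +short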

We'll see how it goes, but I haven't had any more DNS instability since switching over.

 
Read more...

from adasauce

I recently weathered a bloody battle with grub2 which ended with me pondering: why don't I just boot this VM via UEFI? (Automating an Arch install to boot from UEFI post coming soon.) To hell with grub, its finicky configuration, and the massive pain in my side.

Step 1: Install OVMF on the VM host

There are a few different OVMF packages available via extra and the AUR. After trying the extra/ovmf package and it not working immediately, I uninstalled it and jumped to the AUR with aur/ovmf-aarch64 & aur/ovmf-git.

This led me down an interesting path, as it seemed that the OVMF_CODE.fd and OVMF_VARS.fd files were missing, and just OVMF.fd was compiled in their place. One of the OVMF co-maintainers chimed in on the Debian bug tracker, explaining essentially that it's up to distributions of the project to split the files, as shipping the combined OVMF.fd leads to confusion. It certainly led to my confusion.

Per the co-maintainer's suggestion, I went back to the distribution's ovmf package, and started debugging again from there.

$ yay -S extra/ovmf

Onto the extra configuration!

Step 2: Configure Libvirt to use OVMF

Libvirt claims that the nvram configuration option is obsolete, and that Libvirt will follow the QEMU firmware metadata specification to automatically locate firmware images.

I found this to be hopelessly false. Whether or not the package maintainer is following the expected format or locations, these values do need to be configured to those paths where ovmf installs the files.

/etc/libvirt/qemu.conf

...
nvram = [
   "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]
...

From there, simply restart the libvirtd service and create the new VM with the appropriate <os> values.

<os>
  <type arch="x86_64" machine="pc-q35-4.0">hvm</type>
  <loader readonly="yes" type="pflash">/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/archlinux_VARS.fd</nvram>
  <boot dev="hd"/>
</os>
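
If you're defining the domain XML by hand rather than through a wizard, the usual virsh dance applies; the domain name and file below are just placeholders:

$ sudo systemctl restart libvirtd
$ sudo virsh define archlinux.xml
$ sudo virsh start archlinux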

If you are using virt-manager to create your VMs through the wizard, you should be able to now select “UEFI x86_64” in the “Firmware” dropdown when you customize your machine.

 
Read more...

from thurloat's thoughts

Just a quick update to pass along regarding our write.with.parts deployment.

The deployment has been properly Dockerized and the Nginx container is caching assets.

The container build pipeline has worked through enough times that I'm pretty confident building and releasing as new features are added upstream. I still have more tweaks I'd like to make to our landing page and general templates, but I'll be holding off until pull request #131 lands so I'm not duplicating any work.

I want to ensure we don't end up in a stuck fork like Moosetodon. Following upstream development while carrying our own WriteFreely changes must not tie our hands behind our backs the way the massive Mastodon refactors did, which made our changes completely incompatible and required full re-writes. On the Mastodon side, this was partially my fault for misjudging the maturity of the codebase at the time our deeply integrated features were added. We'll be playing it much safer this time around.

As well, I'm considering adding a status / development blog, separate from my own, for official with.parts updates covering not just WriteFreely but our other hosted services as well.

All of my initial 10 invitations appear to have been quickly gobbled up, so just let me know if you're interested in creating a writing spot here or ping me on Mastodon for an invite link.

Happy writing!

 
Read more...

from Matt Pickup funhaus

The Meridian Green Blockchain

What does being green mean to us?

1.) Efficiency – Low-power hardware, running on a fraction of the energy a normal server needs

2.) Renewable Energy – All blockchain and application servers operate on green sources of energy

3.) Simpler computations – our network transactions are added by authorized nodes in a Proof of Authority protocol

System architecture – integrations and storage

 
Read more...

from thurloat's thoughts

We're up in production mode. There are still some housekeeping tasks to take care of for the server, such as:

– better asset caching
– automating the currently manual backups
– customizing some of the templates to make it feel more like home

If you want to come hang out and chat about it, you can get a hold of me on matrix: @thurloat:with.parts.

There is an invite floating around the Fediverse for my followers if you're looking to register an account, or you can reach out to me directly for a fresh invitation.

Looking forward to seeing you all out there.

 
Read more...