
Using the Page Visibility API

This post takes a look at what page visibility is, how you can use the Page Visibility API in your applications, and describes pitfalls to avoid if you build features around this functionality.

Read the whole story
bernhardbock
12 hours ago

A few notes on AWS Nitro Enclaves: Attack surface


By Paweł Płatek

In the race to secure cloud applications, AWS Nitro Enclaves have emerged as a powerful tool for isolating sensitive workloads. But with great power comes great responsibility—and potential security pitfalls. As pioneers in confidential computing security, we at Trail of Bits have scrutinized the attack surface of AWS Nitro Enclaves, uncovering potential bugs that could compromise even these hardened environments.

This post distills our hard-earned insights into actionable guidance for developers deploying Nitro Enclaves. After reading, you’ll be equipped to:

  • Identify and mitigate key security risks in your enclave deployment
  • Implement best practices for randomness, side-channel protection, and time management
  • Avoid common pitfalls in virtual socket handling and attestation

We’ll cover a number of topics, including vsocks, randomness, side channels, memory, time, attestation, and the NSM driver.

Whether you’re new to Nitro Enclaves or looking to harden existing deployments, this guide will help you navigate the unique security landscape of confidential computing on AWS.

A brief threat model

First, a brief threat model. Enclaves can be attacked from the parent Amazon EC2 instance, which is the only component that has direct access to an enclave. In the context of an attack on an enclave, we should assume that the parent instance’s kernel (including its nitro_enclaves drivers) is controlled by the attacker. DoS attacks from the instance are not really a concern, as the parent can always shut down its enclaves.

If the EC2 instance forwards user traffic from the internet, then attacks on its enclaves could come from that direction and could involve all the usual attack vectors (business-logic, memory corruption, cryptographic, etc.). And in the other direction, users could be targeted by malicious EC2 instances with impersonation attacks.

In terms of trust zones, an enclave should be treated as a single trust zone. Enclaves run normal Linux and can theoretically use its access control features to “draw lines” within themselves. But that would be pointless—adversarial access (e.g., via a supply-chain attack) to anything inside the enclave would diminish the benefits of its strong isolation and of attestation. Therefore, compromise of a single enclave component should be treated as a total enclave compromise.

Finally, the hypervisor is trusted—we must assume it behaves correctly and not maliciously.

Figure 1: A simplified model of the AWS Nitro Enclaves system

Vsocks

The main entrypoint to an enclave is the local virtual socket (vsock). Only the parent EC2 instance can use the socket. Vsocks are managed by the hypervisor—the hypervisor provides the parent EC2 instance’s and the enclave’s kernels with /dev/vsock device nodes.

Vsocks are identified by a context identifier (CID) and a port. Every enclave must use a unique CID, which is set during initialization, and can listen on multiple ports. There are a few predefined CIDs:

  • VMADDR_CID_HYPERVISOR = 0
  • VMADDR_CID_LOCAL = 1
  • VMADDR_CID_HOST = 2
  • VMADDR_CID_PARENT = 3 (the parent EC2 instance)
  • VMADDR_CID_ANY = 0xFFFFFFFF = -1U (listen on all CIDs)

Enclaves usually use only the VMADDR_CID_PARENT CID (to send data) and the VMADDR_CID_ANY CID (to listen for data). An example use of the VMADDR_CID_PARENT can be found in the init.c module of AWS’s enclaves SDK—the enclave sends a “heartbeat” signal to the parent EC2 instance just after initialization. The signal is handled by the nitro-cli tool.
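As a sketch of the addressing scheme (in Python; the port number here is a made-up example for illustration, not the one nitro-cli uses), the two sides address each other like this:

```python
# Well-known vsock context IDs (from the list above)
VMADDR_CID_HYPERVISOR = 0
VMADDR_CID_LOCAL = 1
VMADDR_CID_HOST = 2
VMADDR_CID_PARENT = 3
VMADDR_CID_ANY = 0xFFFFFFFF

HEARTBEAT_PORT = 9000  # hypothetical port, for illustration only

def enclave_listen_address(port=HEARTBEAT_PORT):
    # The enclave binds to all CIDs; only the parent can reach it anyway.
    return (VMADDR_CID_ANY, port)

def parent_address(port=HEARTBEAT_PORT):
    # The enclave connects outward to its parent EC2 instance.
    return (VMADDR_CID_PARENT, port)

# On a real enclave you would then do something like:
#   sock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
#   sock.bind(enclave_listen_address())
```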

Standard socket-related issues are the main issues to worry about when it comes to vsocks. When developing an enclave, consider the following to ensure such issues cannot enable certain attack vectors:

  • Does the enclave accept connections asynchronously (with multithreading)? If not, a single user may block other users from accessing the enclave for a long period of time.
  • Does the enclave time out connections? If not, a single user may persistently occupy a socket or open multiple connections to the enclave and drain available resources (like file descriptors).
  • If the enclave uses multithreading, is its state synchronization correctly implemented?
  • Does the enclave handle errors correctly? Reading from a socket with the recv method is especially tricky. A common pattern is to loop over the recv call until the desired number of bytes is received, but this pattern should be carefully implemented:
    • If the EINTR error is returned, the enclave should retry the recv call. Otherwise, the enclave may drop valid and live connections.
    • If there is no error but the returned length is 0, the enclave should break the loop. Otherwise, the peer may shut down the connection before sending the expected number of bytes, making the enclave loop infinitely.
    • If the socket is non-blocking, then reading data correctly is even more tricky.
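A sketch of a careful receive loop, in Python (the function name is ours; the same logic applies to a C recv loop):

```python
import socket

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a blocking socket."""
    buf = bytearray()
    while len(buf) < n:
        try:
            chunk = sock.recv(n - len(buf))
        except InterruptedError:
            # EINTR: the call was interrupted by a signal; the connection
            # is still live, so retry instead of dropping it.
            continue
        if not chunk:
            # Zero-length read: the peer shut down before sending
            # everything it promised. Stop instead of looping forever.
            raise ConnectionError(f"peer closed after {len(buf)}/{n} bytes")
        buf += chunk
    return bytes(buf)
```

(Modern Python actually retries EINTR itself per PEP 475; the explicit branch mirrors what C code must do by hand.)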

The main risk of these issues is DoS. The parent EC2 instance may shut down any of its enclaves, so the actual risks are present only if a DoS can be triggered by external users. Providing timely access to the system is the responsibility of both the enclave and the EC2 instance communicating with the enclave.

Another vulnerability class involving vsocks is CID confusion: if an EC2 instance runs multiple enclaves, it may send data to the wrong one (e.g., due to a race condition issue). However, even if such a bug exists, it should not pose much risk or contribute much to an enclave’s attack surface, because traffic between users and the enclave should be authenticated end to end.

Finally, note that enclaves use the SOCK_STREAM socket type by default. If you change the type to SOCK_DGRAM, do some research to learn about the security properties of this communication type.

Randomness

Enclaves must have access to secure randomness. The word “secure” in this context means that adversaries don’t know or control all the entropy used to produce random data. On Linux, a few entropy sources are mixed together by the kernel. Among them are the CPU-provided RDRAND/RDSEED source and platform-provided hardware random number generators (RNGs). The AWS Nitro Trusted Platform Module provides its own hardware RNG (called nsm-hwrng).

Figure 2: Randomness sources in the Linux kernel

The final randomness can be obtained via the getrandom system call or from (less reliable) /dev/{u}random devices. There is also the /dev/hwrng device, which gives more direct access to the selected hardware RNG. This device should not be used by user-space applications.
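For instance, the getrandom syscall is exposed directly in Python (a Linux-only sketch):

```python
import os

# getrandom() blocks until the kernel's entropy pool has been
# initialized, which is what you want at enclave startup; a plain
# read of /dev/urandom on old kernels gives no such guarantee.
key = os.getrandom(32)
assert len(key) == 32
```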

When a new hardware RNG is registered by the kernel, it is used right away to add entropy to the system. A list of available hardware RNGs can be found in the /sys/class/misc/hw_random/rng_available file. One of the registered RNGs is selected automatically to periodically add entropy and is indicated in the /sys/devices/virtual/misc/hw_random/rng_current file.

We recommend configuring your enclaves to explicitly check that the current RNG (rng_current) is set to nsm-hwrng. This check will ensure that the AWS Nitro RNG was successfully registered and that it’s the one the kernel uses periodically to add entropy.
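A minimal runtime version of that check might look like this (a sketch; the function name is ours, and the path is parameterized so the check is testable):

```python
RNG_CURRENT = "/sys/devices/virtual/misc/hw_random/rng_current"

def assert_nsm_hwrng(path=RNG_CURRENT, expected="nsm-hwrng"):
    """Fail fast if the AWS Nitro hardware RNG is not the kernel's
    current hwrng, e.g. because registration failed during boot."""
    with open(path) as f:
        current = f.read().strip()
    if current != expected:
        raise RuntimeError(f"unexpected hardware RNG: {current!r}")
```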

To further boost the security of your enclave’s randomness, have it pull entropy from external sources whenever there are convenient sources available. A common external source is the AWS Key Management Service, which provides a convenient GenerateRandom method that enclaves can use to bring in entropy over an encrypted channel.
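One way to fold external entropy (e.g., the bytes returned by a GenerateRandom call) into locally generated randomness is to hash the length-prefixed concatenation, so the output stays unpredictable as long as any one input is. A sketch (function name and placeholder values are ours):

```python
import hashlib
import os

def mix_entropy(*sources: bytes) -> bytes:
    """Combine independent entropy sources into one 32-byte seed."""
    h = hashlib.sha256()
    for s in sources:
        # Length-prefix each source so boundaries between inputs
        # are unambiguous.
        h.update(len(s).to_bytes(8, "big"))
        h.update(s)
    return h.digest()

# In an enclave, kms_bytes would come from GenerateRandom proxied
# over the vsock; this placeholder is NOT real entropy.
kms_bytes = b"\x00" * 32
seed = mix_entropy(os.urandom(32), kms_bytes)
```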

If you want to follow NIST/AIS standards (see section 5.3.1 in “Documentation and Analysis of the Linux Random Number Generator”) or suspect issues with the RDRAND/RDSEED instructions (see also this LWN.net article and this tweet), you can disable the random.trust_{bootloader,cpu} kernel parameters. That tells the kernel not to credit these sources when estimating available entropy.

Lastly, make sure that your enclaves use a kernel version greater than 5.17.12, as important changes were introduced to the kernel’s random algorithm in that release.

Side channels

Application-level timing side-channel attacks are a threat to enclaves, as they are to any application. Applications running inside enclaves must process confidential data in constant time. Attacks from the parent EC2 instance can use almost system-clock-precise time measurements, so don’t count on network jitter for mitigations. You can read more about timing attack vectors in our blog post “The life of an optimization barrier.”

Also, though this doesn’t really constitute a side-channel attack, error messages returned by an enclave can be used by attackers to reason about the enclave’s state. Think about issues like padding oracles and account enumeration. We recommend keeping errors returned by enclaves as generic as possible. How generic errors should be will depend on the given business requirements, as users of any application will need some level of error tracing.
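One common pattern (a sketch; the names are ours) is to log full detail inside the enclave and hand the caller only a generic message plus an opaque correlation ID for tracing:

```python
import logging
import uuid

log = logging.getLogger("enclave")

def to_client_error(exc: Exception) -> dict:
    """Map any internal failure to a single generic client response."""
    ref = uuid.uuid4().hex[:8]
    # Full detail stays in the enclave-side log only.
    log.error("request failed [%s]: %r", ref, exc)
    return {"error": "request failed", "ref": ref}
```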

CPU memory side channels

The main type of side-channel attack to know about involves CPU memory. CPUs share some memory—most notably the cache lines. If memory is simultaneously accessible to two components from different trust zones—like an enclave and its parent EC2 instance—then it may be possible for one component to indirectly leak the other component’s data via measurements of memory access patterns. Even if an application processes secret data in constant time, attackers with access to this type of side channel can exploit data-dependent branching.

In a typical architecture, CPUs can be categorized into NUMA nodes, CPU cores, and CPU threads. The smallest physical processing unit is the CPU core. A core may have multiple logical threads (virtual CPUs)—the smallest logical processing units—and threads share the L1 and L2 caches. The L3 cache (also called the last-level cache) is shared among all cores in a NUMA node.

Figure 3: Example CPU arrangement of a system, obtained by the lstopo command

Parent EC2 instances may have been allocated only a few CPU cores from a NUMA node. Therefore, they may share an L3 cache with other instances. However, the AWS white paper “The Security Design of the AWS Nitro System” claims that the L3 cache is never shared simultaneously. Unfortunately, there is not much more information on the topic.

Figure 4: An excerpt from the AWS white paper, stating that instances with one-half the max amount of CPUs should fill a whole CPU core (socket?)

What about CPUs in enclaves? CPUs are taken from the parent EC2 instance and assigned to an enclave. According to the AWS and nitro-cli source code, the hypervisor enforces the following:

  • The CPU #0 core (all its threads) is not assignable to enclaves.
  • Enclaves must use full cores.
  • All cores assigned to an enclave must be from the same NUMA node.

In the worst case, an enclave will share the L3 cache with its parent EC2 instance (or with other enclaves). However, whether the L3 cache can be used to carry out side-channel attacks is debatable. On one hand, the AWS white paper doesn’t make a big deal of this attack vector. On the other hand, recent research indicates the practicality of such an attack (see “Last-Level Cache Side-Channel Attacks Are Feasible in the Modern Public Cloud”).

If you are very concerned about L3 cache side-channel attacks, you can run the enclave on a full NUMA node. To do so, you would have to allocate more than one full NUMA node to the parent EC2 instance so that one NUMA node can be used for the enclave while saving some CPUs on the other NUMA node for the parent. Note that this mitigation is resource-inefficient and costly.

Alternatively, you can experiment with Intel’s Cache Allocation Technology (CAT) to isolate the enclave’s L3 cache (see the intel-cmt-cat software) from the parent. Note, however, that we don’t know whether CAT can be changed dynamically for a running enclave—that would render this solution ineffective.

If you implement any of the above mitigations, you will have to add relevant information to the attestation. Otherwise, users won’t be able to ensure that the L3 side-channel attack vector was really mitigated.

Anyway, you want your security-critical code (like cryptography) to be implemented with secrets-independent memory access patterns. Both hardware- and software-level security controls are important here.
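At the software level that means, at minimum, never comparing secrets with ordinary equality. In Python, for instance, the standard library provides a constant-time comparison (a sketch; the wrapper name is ours):

```python
import hmac

def tokens_equal(expected: bytes, provided: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the first
    # mismatching byte is, unlike `==`, which returns early.
    return hmac.compare_digest(expected, provided)
```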

Memory

Memory for enclaves is carved out from parent EC2 instances. It is the hypervisor’s responsibility to protect access to an enclave’s memory and to clear it after it’s returned to the parent. When it comes to enclave memory as an attack vector, developers really only need to worry about DoS attacks. Applications running inside an enclave should have limits on how much data external users can store. Otherwise, a single user may be able to consume all of an enclave’s available memory and crash the enclave (try running cat /dev/zero inside the enclave to see how it behaves when a large amount of memory is consumed).
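A sketch of such a per-user cap (the class name and limits are ours, not an AWS API):

```python
class StorageQuota:
    """Bound how much enclave memory any single external user can consume."""

    def __init__(self, per_user_limit: int):
        self.per_user_limit = per_user_limit
        self.used = {}

    def charge(self, user: str, nbytes: int) -> None:
        """Account nbytes to user, refusing anything over the cap."""
        new_total = self.used.get(user, 0) + nbytes
        if new_total > self.per_user_limit:
            raise MemoryError(f"quota exceeded for {user!r}")
        self.used[user] = new_total
```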

So how much space does your enclave have? The answer is a bit complicated. First of all, the enclave’s init process doesn’t mount a new root filesystem, but keeps the initial initramfs and chroots to a directory (though there is a pending PR that will change this behavior once merged). This puts some limits on the filesystem’s size. Also, data saved in the filesystem will consume available RAM.

You can check the total available RAM and filesystem space by executing the free command inside the enclave. The filesystem’s size limit should be around 40–50% of that total space. You can confirm that by filling the whole filesystem’s space and checking how much data ends up being stored there:

dd count=9999999999 if=/dev/zero > /fillspace
du -h -d1 /

Another issue with memory is that the enclave doesn’t have any persistent storage. Once it is shut down, all its data is lost. Moreover, AWS Nitro doesn’t provide any specific data sealing mechanism. It’s your application’s responsibility to implement it. Read our blog post “A trail of flipping bits” for more information.

Time

A less common source of security issues is an enclave’s time source—namely, from where the enclave gets its time. An attacker who can control an enclave’s time could perform rollback and replay attacks. For example, the attacker could switch the enclave’s time to the past and make the enclave accept expired TLS certificates.

Getting a trusted source of time may be a somewhat complex problem in the space of confidential computing. Fortunately, enclaves can rely on the trusted hypervisor for delivery of secure clock sources. From the developer’s side, there are only three actions worth taking to improve the security and correctness of your enclave’s time sources:

  • Ensure that current_clocksource is set to kvm-clock in the enclave’s kernel configuration; consider even adding an application-level runtime check for the clock (in case something goes wrong during enclave bootstrapping and it ends up with a different clock source).
  • Enable the Precision Time Protocol for better clock synchronization between the enclave and the hypervisor. It’s like the Network Time Protocol (NTP) but works over a hardware connection. It should be more secure (as it has a smaller attack surface) and easier to set up than the NTP.
  • For security-critical functionalities (like replay protections) use Unix time. Be careful with UTC and time zones, as daylight saving time and leap seconds may “move time backwards.”
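For the last point, a sketch of a Unix-time replay guard (a hypothetical helper with an injectable clock so it can be tested deterministically):

```python
import time

class ReplayGuard:
    """Reject messages whose Unix timestamp is stale or not strictly increasing."""

    def __init__(self, max_age_seconds=60.0):
        self.max_age = max_age_seconds
        self.last_seen = float("-inf")

    def check(self, ts, now=None):
        now = time.time() if now is None else now
        if ts <= self.last_seen:
            raise ValueError("timestamp replayed or clock rolled back")
        if now - ts > self.max_age:
            raise ValueError("timestamp too old")
        self.last_seen = ts
```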

Why kvm-clock?

Machines using an x86 architecture can have a few different sources of time. We can use the following command to check the sources available to enclaves:

cat /sys/devices/system/clocksource/clocksource0/available_clocksource

Enclaves should have two sources: tsc and kvm-clock (you can see them if you run a sample enclave and check its sources); the latter is enabled by default, as can be checked in the current_clocksource file. How do these sources work?

The TSC mechanism is based on the Time Stamp Counter register. It is a per-CPU monotonic counter implemented as a model-specific register (MSR). Every (virtual) CPU has its own register. The counter increments with every CPU cycle (more or less). Linux computes the current time based on the counter scaled by the CPU’s frequency and some initial date.

We can read (and write!) TSC values if we have root privileges. To do so, we need the TSC’s offset (which is 16) and its size (which is 8 bytes). MSR registers can be accessed through the /dev/cpu device:

dd iflag=count_bytes,skip_bytes count=8 skip=16 if=/dev/cpu/0/msr
dd if=<(echo "34d6 f1dc 8003 0000" | xxd -r -p) of=/dev/cpu/0/msr seek=16 oflag=seek_bytes

The TSC can also be read with the clock_gettime method using the CLOCK_MONOTONIC_RAW clock ID, and with the RDTSC assembly instruction.

Theoretically, if we change the TSC, the wall clock reported by clock_gettime with the CLOCK_REALTIME clock ID, by the gettimeofday function, and by the date command should change. However, the Linux kernel works hard to try to make TSCs behave reasonably and be synchronized with each other (for example, check out the tsc watchdog code and functionality related to the MSR_IA32_TSC_ADJUST register). So breaking the clock is not that easy.

The TSC can be used to track time elapsed, but where do enclaves get the “some initial date” from which the time elapsed is counted? Usually, in other systems, that date is obtained using the NTP. However, enclaves do not have out-of-the-box access to the network and don’t use the NTP (see slide 26 of this presentation from AWS’s 2020 re:Invent conference).

Figure 5: Possible sources of time for an enclave

With the tsc clock and no NTP, the initial date is somewhat randomly selected—the truth is we haven’t determined where it comes from. You can force an enclave to boot without the kvm-clock by passing the no-kvmclock no-kvmclock-vsyscall kernel parameters (but note that these parameters should not be provided at runtime) and check the initial date for yourself. In our experiments, the date was:

Tue Nov 30 00:00:00 UTC 1999

As you can see, the TSC mechanism doesn’t work well with enclaves. Moreover, it breaks badly when the machine is virtualized. Because of that, AWS introduced the kvm-clock as the default source of time for enclaves. It is an implementation of the paravirtual clock driver (pvclock) protocol (see this article and this blog post for more info on pvclock). With this protocol, the host (the AWS Nitro hypervisor in our case) provides the pvclock_vcpu_time_info structure to the guest (the enclave). The structure contains information that enables the guest to adjust its time measurements—most notably, the host’s wall clock (system_time field), which is used as the initial date.

Interestingly, the guest’s userland applications can use the TSC mechanism even if the kvm-clock is enabled. That’s because the RDTSC instruction is (usually) not emulated and therefore may provide non-adjusted TSC register readings.

Please note that if your enclaves use different clock sources or enable NTP, you should do some additional research to see if there are related security issues.
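The application-level runtime check for the clock source recommended earlier can be as simple as this sketch (the helper name is ours; the path is parameterized so the check is testable):

```python
CLOCKSOURCE = "/sys/devices/system/clocksource/clocksource0/current_clocksource"

def assert_kvm_clock(path=CLOCKSOURCE, expected="kvm-clock"):
    """Fail fast if the enclave somehow booted with a different clock source."""
    with open(path) as f:
        current = f.read().strip()
    if current != expected:
        raise RuntimeError(f"unexpected clock source: {current!r}")
```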

Attestation

Cryptographic attestation is the source of trust for end users. It is essential that users correctly parse and validate attestations. Fortunately, AWS provides good documentation on how to consume attestations.

The most important attestation data is protocol-specific, but we have a few generally applicable tips for developers to keep in mind (in addition to what’s written in the AWS documentation):

  • The enclave should enforce a minimal nonce length.
  • Users should check the timestamp provided in the attestation in addition to nonces.
  • The attestation’s timestamp should not be used to reason about the enclave’s time. This timestamp may differ from the enclave’s time, as the former is generated by the hypervisor, and the latter by whatever clock source the enclave is using.
  • Don’t use RSA for the public_key feature.
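Putting the nonce and timestamp tips together, a validation sketch (the attestation document does carry `nonce` and a millisecond `timestamp`, but the thresholds and function name here are ours):

```python
MIN_NONCE_LEN = 16        # bytes; pick a policy and enforce it
MAX_AGE_MS = 5 * 60_000   # oldest attestation we will accept

def validate_attestation(doc: dict, expected_nonce: bytes, now_ms: int) -> None:
    """Check nonce binding and freshness of a parsed attestation document."""
    if len(expected_nonce) < MIN_NONCE_LEN:
        raise ValueError("nonce too short")
    if doc.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch")
    age_ms = now_ms - doc["timestamp"]
    if age_ms < 0 or age_ms > MAX_AGE_MS:
        raise ValueError("attestation timestamp out of range")
```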

The NSM driver

Your enclave applications will use the NSM driver, which is accessible via the /dev/nsm node. Its source code can be found in the aws-nitro-enclaves-sdk-bootstrap and kernel repositories. Applications communicate with the driver via the IOCTL system call and can use the nsm-api library to do so.

Developers should be aware that applications running inside an enclave may misuse the driver or the library. However, there isn’t much that can go wrong if developers take these steps:

  • The driver lets you extend and lock more platform configuration registers (PCRs) than the basic 0–4 and 8 PCRs. Locked PCRs cannot be extended, and they are included in enclave attestations. How these additional PCRs are used depends on how you configure your application. Just make sure that it distinguishes between locked and unlocked ones.
  • Remember to make the application check the PCRs’ lock state properties when sending the DescribePCR request to the NSM driver. Otherwise, it may be consulting a PCR that may still be manipulated.
  • Requests and responses are CBOR-encoded. Make sure to get the encoding right. Incorrectly decoded responses may provide false data to your application.
  • It is not recommended to use the nsm_get_random method directly. It skips the kernel’s algorithm for mixing multiple entropy sources and therefore is more prone to errors. Instead, use common randomness APIs (like getrandom).
  • The nsm_init method returns -1 on error, which is an unusual behavior in Rust, so make sure your application accounts for that.

That’s (not) all folks

Securing AWS Nitro Enclaves requires vigilance across multiple attack vectors. By implementing the recommendations in this post—from hardening virtual sockets to verifying randomness sources—you can significantly reduce the risk of compromise to your enclave workloads, helping shape a more secure future for confidential computing.

Key takeaways:

  1. Treat enclaves as a single trust zone and implement end-to-end security.
  2. Mitigate side-channel risks through proper CPU allocation and constant-time processing.
  3. Verify enclave entropy sources at runtime.
  4. Use the right time sources inside the enclave.
  5. Implement robust attestation practices, including nonce and timestamp validation.

For more security considerations, see our first post on enclave images and attestation. If your enclave uses external systems—like AWS Key Management Service or AWS Certificate Manager—review the systems and supporting tools for additional security footguns.

We encourage you to critically evaluate your own Nitro Enclave deployments. Trail of Bits offers in-depth security assessments and custom hardening strategies for confidential computing environments. If you’re ready to take your Nitro Enclaves’ security to the next level, contact us to schedule a consultation with our experts and ensure that your sensitive workloads remain truly confidential.


Multiple Anchors | CSS-Tricks


Only Chris, right? You’ll want to view this in a Chromium browser:

This is exactly the sort of thing I love, not for its practicality (cuz it ain’t), but for how it illustrates a concept. Generally, tutorials and demos try to follow the “rules” — whatever those may be — yet breaking them helps you understand how a certain thing works. This is one of those.

The concept is pretty straightforward: one target element can be attached to multiple anchors on the page.

<div class="anchor-1"></div>
<div class="anchor-2"></div>
<div class="target"></div>

We’ve gotta register the anchors and attach the .target to them:

.anchor-1 {
  anchor-name: --anchor-1;
}

.anchor-2 {
  anchor-name: --anchor-2;
}

.target {
  
}

Wait, wait! I didn’t attach the .target to the anchors. That’s because we have two ways to do it. One is using the position-anchor property.

.target {
  position-anchor: --anchor-1;
}

That establishes a target-anchor relationship between the two elements. But it only accepts a single anchor value. Hmm. We need more than that. That’s what the anchor() function can do. Well, it doesn’t take multiple values, but we can declare it multiple times on different inset properties, each referencing a different anchor.

.target {
  top: anchor(--anchor-1 bottom);
}

The second piece of the anchor() function is the anchor edge we’re positioned against, and it’s gotta be some sort of physical or logical inset (top, bottom, start, end, inside, outside, etc.) or a percentage. We’re basically saying, “Take that .target and slap its top edge against --anchor-1’s bottom edge.”

That also works for other inset properties:

.target {
  top: anchor(--anchor-1 bottom);
  left: anchor(--anchor-1 right);
  bottom: anchor(--anchor-2 top);
  right: anchor(--anchor-2 left);
}

Notice how both anchors are declared on different properties by way of anchor(). That’s rad. But we aren’t actually anchored yet because the .target is just like any other element that participates in the normal document flow. We have to yank it out with absolute positioning for the inset properties to take hold.

.target {
  position: absolute;

  top: anchor(--anchor-1 bottom);
  left: anchor(--anchor-1 right);
  bottom: anchor(--anchor-2 top);
  right: anchor(--anchor-2 left);
}

In his demo, Chris cleverly attaches the .target to two <textarea> elements. What makes it clever is that <textarea> allows you to click and drag it to change its dimensions. The two of them are absolutely positioned, one pinned to the viewport’s top-left edge and one pinned to the bottom-right.

If we attach the .target’s top and left edges to --anchor-1’s bottom and right edges, then attach the target’s bottom and right edges to --anchor-2’s top and left edges, we’re effectively anchored to the two <textarea> elements. This is what allows the .target element to stretch with the <textarea> elements when they are resized.

But there’s a small catch: a <textarea> is resized from its bottom-right corner. The second <textarea> is positioned in a way where the resizer isn’t directly attached to the .target. If we rotate(180deg), though, it’s all good.

Again, you’ll want to view that in a Chromium browser at the time I’m writing this. Here’s a clip instead if you prefer.

That’s just a background-color on the .target element. We can put a little character in there instead as a background-image like Chris did to polish this off.

Fun, right?! It still blows my mind this is all happening in CSS. It wasn’t many days ago that something like this would’ve been a job for JavaScript.

Direct Link →


Scaling virtio-blk disk I/O with IOThread Virtqueue Mapping


This article covers the IOThread Virtqueue Mapping feature for Kernel-based virtual machine (KVM) guests that was introduced in Red Hat Enterprise Linux (RHEL) 9.4.

The problem

Modern storage evolved to keep pace with growing numbers of CPUs by providing multiple queues through which I/O requests can be submitted. This allows CPUs to submit I/O requests and handle completion interrupts locally. The result is good performance and scalability on machines with many CPUs.

Although virtio-blk devices in KVM guests have multiple queues by default, they do not take advantage of multi-queue on the host. I/O requests from all queues are processed in a single thread on the host for guests with the <driver io=native …> libvirt domain XML setting. This single thread can become a bottleneck for I/O bound workloads.

KVM guests can now benefit from multiple host threads for a single device through the new IOThread Virtqueue Mapping feature. This improves I/O performance for workloads where the single thread is a bottleneck. Guests with many vCPUs should use this feature to take advantage of additional capacity provided by having multiple threads.

If you are interested in the QEMU internals involved in developing this feature, you can find out more in this blog post and this KVM Forum presentation. Making QEMU’s block layer thread safe was a massive undertaking that we are proud to have contributed upstream.

How IOThread Virtqueue Mapping works

IOThread Virtqueue Mapping lets users assign individual virtqueues to host threads, called IOThreads, so that a virtio-blk device is handled by more than one thread. Each virtqueue can be assigned to one IOThread.

Most users will opt for round-robin assignment so that virtqueues are automatically spread across a set of IOThreads. Figure 1 illustrates how 4 queues are assigned in round-robin fashion across 2 IOThreads.

A depiction of a virtio-blk device with four queues assigned to two IOThreads. Queue 1 and Queue 3 are green and assigned to IOThread 1. Queue 2 and Queue 4 are red and assigned to IOThread 2.
Figure 1: A virtio-blk device with 4 queues assigned to 2 IOThreads.
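The round-robin assignment in Figure 1 can be sketched as follows (a toy model for illustration, not QEMU code):

```python
def round_robin_assign(num_queues, num_iothreads):
    """Map queue IDs (1-based) to IOThread IDs (1-based), round-robin."""
    return {q: (q - 1) % num_iothreads + 1 for q in range(1, num_queues + 1)}
```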

The libvirt domain XML for this configuration looks like this:

<domain>
  …
  <vcpu>4</vcpu>
  <iothreads>2</iothreads>
  …
  <devices>
    <disk …>
      <driver name='qemu' cache='none' io='native' …>
        <iothreads>
          <iothread id='1'></iothread>
          <iothread id='2'></iothread>
        </iothreads>
      </driver>
    </disk>
  </devices>
</domain>

More details on the syntax can be found in the libvirt documentation.

Configuration tips

The following recommendations are based on our experience developing and benchmarking this feature:

  • Use 4-8 IOThreads. Usually this is sufficient to saturate disks. Adding more threads beyond the point of saturation does not increase performance and may harm it.

  • Share IOThreads between devices unless you know in advance that certain devices are heavily utilized. Keeping a few IOThreads busy but not too busy is ideal.

  • Pin IOThreads away from vCPUs with <iothreadpin> and <vcpupin> if you have host CPUs to spare. IOThreads need to respond quickly when the guest submits I/O. Therefore they should not compete for CPU time with the guest’s vCPU threads.

  • Use <driver io="native" cache="none" …>. IOThread Virtqueue Mapping was designed for io="native". Using io="threads" is not recommended as it does not combine with IOThread Virtqueue Mapping in a useful way.

Performance

The following random read disk I/O benchmark compares IOThread Virtqueue Mapping with 2 and 4 IOThreads against a guest without IOThread Virtqueue Mapping (only 1 IOThread). The guest was configured with 8 vCPUs all submitting I/O in parallel. See Figure 2.

A bar graph depicting random read disk I/O benchmark comparing IOThread Virtqueue Mapping with 2 and 4 IOThreads against a guest without IOThread Virtqueue Mapping (only 1 IOThread). The y axis is labeled iops and the x axis is labeled iodepth.
Figure 2: Random read 4 KB benchmark results for iodepth 1 and 64 with IOPS increasing when comparing 1, 2, and 4 IOThreads.

The most important fio benchmark options are shown here:

fio --ioengine=libaio --rw=randread --bs=4k --numjobs=8 --direct=1 \
    --cpus_allowed=0-7 --cpus_allowed_policy=split

This microbenchmark shows that when 1 IOThread is unable to saturate a disk, adding more IOThreads with IOThread Virtqueue Mapping is a significant improvement. Virtqueues were assigned round-robin to the IOThreads. The disk was an Intel Optane SSD DC P4800X and the guest was running Fedora 39 x86_64. The libvirt domain XML, fio options, benchmark output, and an Ansible playbook are available here.

Real workloads may benefit less depending on how I/O bound they are and whether they submit I/O from multiple vCPUs. We recommend benchmarking your workloads to understand the effect of IOThread Virtqueue Mapping.

A companion blog post explores database performance with IOThread Virtqueue Mapping.

Conclusion

The new IOThread Virtqueue Mapping feature in RHEL 9.4 improves scalability of disk I/O for guests with many vCPUs. Enabling this feature on your KVM guests with virtio-blk devices can boost performance of I/O bound workloads.

The post Scaling virtio-blk disk I/O with IOThread Virtqueue Mapping appeared first on Red Hat Developer.


mTLS: When certificate authentication is done wrong


Although X.509 certificates have been here for a while, they have become more popular for client authentication in zero-trust networks in recent years. Mutual TLS, or authentication based on X.509 certificates in general, brings advantages compared to passwords or tokens, but you get increased complexity in return.

In this post, I’ll deep dive into some interesting attacks on mTLS authentication. We won’t bother you with heavy crypto stuff, but instead we’ll have a look at implementation vulnerabilities and how developers can make their mTLS systems vulnerable to user impersonation, privilege escalation, and information leakages.

We will present some CVEs we found in popular open-source identity servers and ways to exploit them. Finally, we’ll explain how these vulnerabilities can be spotted in source code and how to fix them.

This blog post is based on work that I recently presented at Black Hat USA and DEF CON.

Introduction: What is mutual TLS?

Website certificates are a very widely recognized technology, even to people who don’t work in the tech industry, thanks to the padlock icon used by web browsers. Whenever we connect to Gmail or GitHub, our browser checks the certificate provided by the server to make sure it’s truly the service we want to talk to. Fewer people know that the same technology can be used to authenticate clients: the TLS protocol is also designed to be able to verify the client using public and private key cryptography.

It happens on the handshake level, even before any application data is transmitted:

Excerpt from RFC 5246: "Figure 1. Message flow for a full handshake"

If configured to do so, a server can ask a client to provide a security certificate in the X.509 format. This certificate is just a blob of binary data that contains information about the client, such as its name, public key, issuer, and other fields:

$ openssl x509 -text -in client.crt
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            d6:2a:25:e3:89:22:4d:1b
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=localhost            //used to locate the issuer's certificate
        Validity
            Not Before: Jun 13 14:34:28 2023 GMT
            Not After : Jul 13 14:34:28 2023 GMT
        Subject: CN=client          //aka "user name"
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:9c:7c:b4:e5:e9:3d:c1:70:9c:9d:18:2f:e8:a0:

The server checks that this certificate is signed by one of the trusted authorities. This is a bit similar to checking the signature of a JWT token. Next, the client sends a “Certificate verify” message encrypted with the private key, so that the server can verify that the client actually has the private key.

How certificates are validated

“Certificate validation” commonly refers to the PKIX certificate validation process defined in RFC 5280.

In short, in order to validate the certificate, the server constructs a certification path (also known as a certificate chain) from the target certificate to a trust anchor. The trust anchor is a self-signed root certificate that is inherently trusted by the validator. The end-entity certificate is often signed by an intermediate CA, which is in turn signed by another intermediate certificate or directly by a trust anchor.

Diagram of certificate chain with three links: Client certificate, Intermediate CA, Root Certificate Authority

Then, for each certificate in the chain, the validator checks the signature, validity period, allowed algorithms and key lengths, key usage, and other properties. There are also a number of optional certificate extensions: if they are included in the certificate, they can be checked as well. This process is quite complicated, so every language or library implements it differently.

Note: in my research I mostly looked at how mTLS is implemented in applications written in Java, but it is likely that the ideas and attacks below apply to other languages as well.

mTLS in a Java web application, an example

Let’s see how to use mTLS in a Java web application. The bare minimum configuration is to enable it in the application settings and specify the location of all trusted root certificates, like this:

$ cat application.properties

…
server.ssl.client-auth=need
server.ssl.trust-store=/etc/spring/server-truststore.p12
server.ssl.trust-store-password=changeit

From the client, such as curl, you need to specify which certificate is sent to the server. The rest of the application code, such as request mappings, is exactly the same as for a normal web application.

$ curl -k -v --cert client.pem https://localhost/hello

This setup works for very simple mTLS configurations, when there is only a single root certificate, and all client certificates are signed by it. You can find this example in various articles on the web and it’s quite secure due to its simplicity. Let’s quickly break down its pros and cons.

Pros:

  • Speed: Authentication happens only during the TLS handshake; all subsequent “keep-alive” HTTP requests are considered authenticated, saving CPU time.
  • Storage: Similar to JWT, the server does not store all client certificates, only the root certificate.

Cons:

  • No granular control: if mTLS is enabled, all requests have to be authenticated, even to /static/style.css.
  • Any certificate signed by a trusted CA can be used to access this HTTP service. Even if the certificate is issued for another purpose, it still can potentially be used for TLS authentication.
  • No host verification by default: client certificates can be accepted from any IP.
  • Certificate issuance process needs to be implemented separately.
  • Certificates expire, so need to be rotated frequently.

As you can see, this approach brings some advantages and disadvantages compared to traditional authentication methods, such as password or tokens.

Previous attacks

Before we dive into the attack section, I’ll briefly mention some previous well-known attacks on certificate parsing and validation:

  • Obviously, the security of the authentication system depends on the strength of the signature. If we can somehow forge the content of the certificate, but keep the same signature, we can completely break the authentication process.
  • Since the X.509 format is quite complex, just parsing these data structures can lead to buffer and heap overflows.
  • Lack of basic constraints checking. The end-entity certificates should not be used to sign additional certificates.

My approach

In Java, most of these attacks are already mitigated in APIs provided by the JDK. Weak algorithms are intentionally not allowed. Fuzzing of certificate parsing in Java also did not look productive to me, as the vast majority of PKIX code is implemented in memory-safe Java, instead of using native libraries. I had to take a different approach, so I decided to have a deep look at how mTLS is used from the source code perspective. Since the certificate validation process is quite complex, I suspected that someone might implement it in a weird way. After several weeks, it yielded me some interesting vulnerabilities in popular open source projects.

So, let’s move on to the attack’s section.

Chapter 1: Improper certificate extraction

In real-life applications, developers often need to access the certificate presented during the TLS handshake. For example, they might need it for authorization purposes, such as checking the current username. In Java, there are two common ways to access it:

X509Certificate[] certificates = (X509Certificate[]) sslSession.getPeerCertificates();

// another way
X509Certificate[] certificates = (X509Certificate[]) request.getAttribute("javax.servlet.request.X509Certificate");

Interestingly, this API returns an array of certificates presented by the client, not a single one. Why? Perhaps because the TLS specification allows clients to send a full chain of certificates, from the end-entity certificate to the root CA.

So, I decided to take a look at how different applications use this API. The most common approach I’ve seen is to take only the first certificate from the array and consider it the client certificate. This is correct, as the TLS RFC explicitly says that the sender’s certificate MUST come first in the list.

//way 1 is good
String user = certificates[0].getSubjectX500Principal().getName();

At the same time, I discovered some rare cases when applications disregard this rule and iterate over the array trying to find a certificate that matches some criteria.

//way 2 is dangerous
for (X509Certificate cert : certificates) {
   if (isClientCertificate(cert)) {
      user = cert.getSubjectX500Principal().getName();
   }
}

This is dangerous, as the underlying TLS library in Java only verifies the first certificate in the list. Moreover, it does not require the chain to be sent in a strict order.
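
The difference between the two patterns can be boiled down to a toy example, with plain subject strings standing in for certificates (the class and method names here are made up for illustration):

```java
import java.util.List;

public class ChainUserDemo {
    // Safe: the TLS stack has only verified element 0 of the chain,
    // and the sender's certificate MUST come first.
    static String userFromChain(List<String> subjectDns) {
        return subjectDns.get(0);
    }

    // Dangerous: scanning the whole array may select a trailing,
    // unverified (possibly self-signed) certificate.
    static String userFromChainDangerous(List<String> subjectDns, String wanted) {
        for (String dn : subjectDns) {
            if (dn.equals(wanted)) {
                return dn;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // client1 authenticated; client2's cert was merely appended by the attacker
        List<String> chain = List.of("CN=client1", "CN=client2");
        System.out.println(userFromChain(chain));                        // CN=client1
        System.out.println(userFromChainDangerous(chain, "CN=client2")); // CN=client2
    }
}
```

The dangerous variant happily returns the attacker-chosen subject even though nothing in the TLS layer ever checked its signature.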

Example: CVE-2023-2422 improper certificate validation in KeyCloak

One of these examples was a vulnerability I discovered in Keycloak. Keycloak is a popular authorization server that supports OAuth, SAML, and other authorization methods, as well as mutual TLS.

Keycloak iterates over all certificates in the array, searching for the one that matches the client_id form parameter. As soon as it finds a matching certificate, it implicitly trusts it, assuming that its signature has already been checked during the TLS handshake:

X509Certificate[] certs = null;
ClientModel client = null;
try { 
    certs = provider.getCertificateChain(context.getHttpRequest());
    String client_id = null;
    ...
    if (formData != null) {
        client_id = formData.getFirst(OAuth2Constants.CLIENT_ID);
    }
    …
    matchedCertificate = Arrays.stream(certs)
        .map(certificate -> certificate.getSubjectDN().getName())
        .filter(subjectdn -> subjectDNPattern.matcher(subjectdn).matches())
        .findFirst();

In reality, a client can send as many certificates as they want, and the server only verifies the first one.

A potential attacker can exploit this behavior to authenticate under a different username. It is possible to send a list of certificates where the first one contains one username and is properly chained to a root CA, while the last certificate in the array is self-signed and belongs to a different user. The client does not even need to provide a valid private key for it.

Diagram of a certificate list in which the first client certificate is signed by a CA, but the second is self-signed.

Speaking of exploitation, there are a number of endpoints in Keycloak that support mTLS authentication, but we need one that does not require any additional factors, such as tokens or secrets. “clients-managements/register-node” is a good example, as it mutates the user’s data. We can normally use this API with mTLS in the following way:

$ cat client1.crt client1.key > chain1.pem
$ curl --tlsv1.2 --tls-max 1.2 --cert chain1.pem -v -i -s -k "https://127.0.0.1:8443/realms/master/clients-managements/register-node?client_id=client1" -d "client_cluster_host=http://127.0.0.1:1213/"

To demonstrate the vulnerability, we generate a new self-signed certificate using openssl and add it to the end of the array.

$ openssl req -newkey rsa:2048 -nodes -x509 -subj /CN=client2 -out client2-fake.crt
$ cat client1.crt client1.key client2-fake.crt client1.key > chain2.pem
$ curl --tlsv1.2 --tls-max 1.2 --cert chain2.pem -v -i -s -k "https://127.0.0.1:8443/realms/master/clients-managements/register-node?client_id=client2" -d "client_cluster_host=http://127.0.0.1:1213/"

When we send the second curl request, Keycloak performs this action on behalf of the user specified in client2-fake.crt, instead of client1.crt. Therefore, we can mutate data on the server for any client that supports mTLS.

How to fix that? Easy: just use the first certificate from the array. That’s exactly how Keycloak patched this vulnerability. This CVE is a good example of how developers provide methods and interfaces that can be misunderstood or used incorrectly.

Passing certificate as a header

Another common scenario for mTLS deployments is when the TLS connection is terminated on a reverse proxy. In this case, the reverse proxy often checks the certificate and forwards it to a backend server as an additional header. Here is a typical nginx configuration to enable mTLS:

$ cat nginx.conf

http {
    server {
        server_name example.com;
        listen 443 ssl;
        …
        ssl_client_certificate /etc/nginx/ca.pem;
        ssl_verify_client on;

        location / {
            proxy_pass http://host.internal:80;
            proxy_set_header ssl-client-cert $ssl_client_cert;
        }
    }

I’ve seen a number of systems like that, and in most cases the backend servers behind nginx do not perform additional validation, just trusting the reverse proxy. This behavior is not directly exploitable, but it’s not ideal either. Why? Well, first of all, it means that any server in the local network can make a request with this header, so this network segment needs to be carefully isolated from any traffic coming from outside. Additionally, if the backend or reverse proxy is affected by request smuggling or header injection, its exploitation becomes trivial. Over the past few years, we’ve seen a lot of request and header smuggling vulnerabilities, including the latest CVEs in Netty and Node.js. Be careful when implementing these scenarios and check the certificate’s signature on all servers if possible.
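
If the backend cannot terminate TLS itself, it can at least re-validate the forwarded PEM instead of trusting the header blindly. A minimal sketch (the header name, the URL-encoding convention, and the class name are assumptions; adapt them to your proxy's actual behavior):

```java
import java.io.ByteArrayInputStream;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Base64;

public class ForwardedCert {
    // nginx's $ssl_client_cert is PEM; proxies typically URL-encode it
    // before placing it into a header. Recover the raw DER bytes.
    static byte[] pemToDer(String headerValue) {
        String pem = URLDecoder.decode(headerValue, StandardCharsets.UTF_8);
        String b64 = pem.replace("-----BEGIN CERTIFICATE-----", "")
                        .replace("-----END CERTIFICATE-----", "")
                        .replaceAll("\\s", "");
        return Base64.getDecoder().decode(b64);
    }

    // Re-check the certificate on the backend instead of trusting the proxy:
    static X509Certificate parseAndVerify(byte[] der, X509Certificate caCert) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate client =
                (X509Certificate) cf.generateCertificate(new ByteArrayInputStream(der));
        client.checkValidity();               // not expired or not-yet-valid
        client.verify(caCert.getPublicKey()); // actually signed by our CA
        return client;
    }
}
```

This does not replace full chain validation (a single-CA deployment is assumed here), but it removes the "any host on the local segment can forge the header" problem.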

Chapter 2: “Follow the chain, where does it lead you?”

Excerpt from RFC 4158: "Figure 1 - Sample Hierarchical PKI"

In large systems, servers may not store all root and intermediate certificates locally, but use external storage instead. RFC 4387 explains the concept of a certificate store: an interface you can use to lazily access certificates during chain validation. These stores are implemented over different protocols, such as HTTP, LDAP, FTP, or SQL queries.

RFC 3280 defines some X.509 certificate extensions that can contain information about where to find the issuer and CA certificates. For instance, the Authority Information Access (AIA) extension contains a URL pointing to the Issuer’s certificate. If this extension is used for validation, there is a high chance that you can exploit it to perform an SSRF attack. Also, Subject, Issuer, Serial, and their alternative names can be used to construct SQL or LDAP queries, creating opportunities for injection attacks.

Client certificate with an AIA extension, containing a link to http://example.com

When certificate stores are in use, you should think of these values as “untrusted user input” or “Insertion points,” similar to those we have in Burp Suite’s Intruder. And what attackers will really love is that all of these values can be used in queries before the signature is checked.

Example: CVE-2023-33201 LDAP injection in Bouncy Castle

To demonstrate an example of this vulnerability, we’ll use LDAPCertStore from the Bouncy Castle library. Bouncy Castle is one of the most popular libraries for certificate validation in Java. Here is an example of how you can use this store to build and validate a certificate chain.

PKIXBuilderParameters pkixParams = new PKIXBuilderParameters(keystore, selector);

//setup additional LDAP store
X509LDAPCertStoreParameters CertStoreParameters = new X509LDAPCertStoreParameters.Builder("ldap://127.0.0.1:1389", "CN=certificates").build();
CertStore certStore = CertStore.getInstance("LDAP", CertStoreParameters, "BC");
pkixParams.addCertStore(certStore);

// Build and verify the certification chain
try {
   CertPathBuilder builder = CertPathBuilder.getInstance("PKIX", "BC");
   PKIXCertPathBuilderResult result =
           (PKIXCertPathBuilderResult) builder.build(pkixParams);

Under the hood, Bouncy Castle uses the Subject field from the certificate to build an LDAP query. The Subject field is inserted in the filter, without—you guessed it—any escaping.

Client certificate containing the text "Subject: CN=Client*)(userPassword=123"

So, if the Subject contains any special characters, it can change the syntax of the query. In most cases, this can be exploited as a blind LDAP query injection. Therefore, it might be possible to use this vulnerability to extract other fields from the LDAP directory. The exploitability depends on many factors, including whether the application exposes any errors, as well as on the structure of the LDAP directory.

In general, whenever you incorporate user-supplied data into an LDAP query, special characters should be properly filtered. That’s exactly how this CVE has been patched in the Bouncy Castle code.
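
The fix amounts to RFC 4515-style escaping of any attacker-influenced value before it is spliced into a filter. A minimal sketch of such a helper (this is a hypothetical function for illustration, not the actual Bouncy Castle patch):

```java
public class LdapFilterEscape {
    // Escape the characters RFC 4515 reserves inside LDAP filter values.
    static String escapeFilterValue(String value) {
        StringBuilder out = new StringBuilder(value.length());
        for (char c : value.toCharArray()) {
            switch (c) {
                case '\\': out.append("\\5c"); break;
                case '*':  out.append("\\2a"); break;
                case '(':  out.append("\\28"); break;
                case ')':  out.append("\\29"); break;
                case '\0': out.append("\\00"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The malicious Subject from the example above is neutralized:
        System.out.println(escapeFilterValue("Client*)(userPassword=123"));
        // -> Client\2a\29\28userPassword=123
    }
}
```

With the metacharacters hex-escaped, the value can no longer terminate the filter expression and inject additional conditions.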

Chapter 3: Certificate revocation and its unintended uses

Similar to JSON Web Tokens, the beauty of certificate chains is that they can be trusted based on their signature alone. But what happens if we need to revoke a certificate, so it can no longer be used?

The PKIX specification (RFC 4387) addresses this problem by proposing a special store for revoked certificates, accessible via the HTTP or LDAP protocols. Many developers believe that revocation checking is absolutely necessary, whereas others urge avoiding it for performance reasons or only using offline revocation lists.

Generally speaking, the store location can be hardcoded into the application or taken from the certificate itself. There are two certificate extensions used for that: the Authority Information Access (AIA) OCSP URL and CRL Distribution Points (CRLDP).

Client certificate containing URLs in its AIA OCSP and CRL Distribution Points extensions.

Looking at it from the hacker's point of view, I think it's incredible that the location of the revocation server can be taken from the certificate itself. So, if the application takes URLs from the AIA or CRLDP extension to make a revocation check, this can be abused for SSRF attacks.

Sadly for attackers, this normally happens after the signature checks, but in some cases it’s still exploitable.

Moreover, LDAP is also supported, at least in Java. You have probably heard that, in Java, unmarshaling an LDAP lookup response can lead to remote code execution. A few years back, Moritz Bechler reported this problem, and remote code execution via revocation has since been patched in the JDK. You can check out his blog post for more details.

In my research, I decided to check if the Bouncy Castle library is also affected. It turns out that Bouncy Castle can be configured to use the CRLDP extension and make calls to an LDAP server. At the same time, Bouncy Castle only fetches a specific attribute from the LDAP response and does not support references. So, remote code execution is not possible there. HTTP SSRF is still viable though.

private static Collection getCrlsFromLDAP(CertificateFactory certFact, URI distributionPoint) throws IOException, CRLException
{
    Map<String, String> env = new Hashtable<String, String>();

    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, distributionPoint.toString());

    byte[] val = null;
    try
    {
        DirContext ctx = new InitialDirContext((Hashtable)env);
        Attributes avals = ctx.getAttributes("");
        Attribute aval = avals.get("certificateRevocationList;binary");
        val = (byte[])aval.get();
    }

Example: CVE-2023-28857 credentials leak in Apereo CAS

I also had a quick look at open source projects that support mTLS and perform revocation checking. One of these projects was Apereo CAS. It’s another popular authentication server that is highly configurable. Administrators of Apereo CAS can enable the revocation check using an external LDAP server by specifying its address and password in the settings:

cas.authn.x509.crl-fetcher=ldap
cas.authn.x509.ldap.ldap-url=ldap://example.com:1389/
cas.authn.x509.ldap.bind-dn=admin
cas.authn.x509.ldap.bind-credential=s3cr3taaaaa

If these settings are applied, Apereo CAS performs the revocation check for the certificate, fetching the address from the certificate’s CRLDP extension.

/**
* Validate the X509Certificate received.
*
* @param cert the cert
* @throws GeneralSecurityException the general security exception
*/
private void validate(final X509Certificate cert) throws GeneralSecurityException {
   cert.checkValidity();
   this.revocationChecker.check(cert);

   val pathLength = cert.getBasicConstraints();
   if (pathLength < 0) {
       if (!isCertificateAllowed(cert)) {
           val msg = "Certificate subject does not match pattern " + this.regExSubjectDnPattern.pattern();
           LOGGER.error(msg);

I was afraid that this could lead to remote code execution, but it turns out that Apereo CAS uses a custom library for LDAP connection, which does not support external codebases or object factories needed for RCE.

When I tested this in Apereo CAS, I noticed one interesting behavior. The server prefers the LDAP URL located inside the certificate, instead of the one that is configured in settings. At the same time, Apereo CAS still sends the password from the settings. I quickly set up a testing environment and sent a self-signed certificate in the header. My self-signed certificate had a CRLDP extension with the LDAP URL pointing to a netcat listener. After sending this request to Apereo CAS, I received a request to my netcat listener with the username and password leaked.

Pair of screenshots: the first contains a POST request to Apereo CAS and the second is a terminal running netcat.

After reporting this vulnerability, the application developers issued a fix within just one day. They patched it by clearing the login and password used for LDAP connection if the URL is taken from the CRLDP. Therefore, the password leak is no longer possible. Nevertheless, I would say that using URLs from the CRLDP extension is still dangerous, as it broadens the attack surface.

Summary

If you’re developing an mTLS system or performing a security assessment, I suggest:

  1. Pay attention when extracting usernames from the mTLS chain, as TLS servers only verify the first certificate in the chain.
  2. Use certificate stores with caution, as they can lead to LDAP and SQL injections.
  3. Certificate revocation checks can lead to SSRF, or even to RCE in the worst case. So, do the revocation check only after all other checks, and do not rely on URLs taken from certificate extensions.

The post mTLS: When certificate authentication is done wrong appeared first on The GitHub Blog.


axe-core and shot-scraper for accessibility audits


I just watched a talk by Pamela Fox at North Bay Python on Automated accessibility audits. The video should be up within 24 hours.

One of the tools Pamela introduced us to was axe-core, which is a JavaScript library at the heart of a whole ecosystem of accessibility auditing tools.

I figured out how to use it to run an accessibility audit using my shot-scraper CLI tool:

shot-scraper javascript https://datasette.io "
async () => {
  const axeCore = await import('https://cdn.jsdelivr.net/npm/axe-core@4.7.2/+esm');
  return axeCore.default.run();
}
"

The first line loads an ESM build of axe-core from the jsdelivr CDN. I figured out the URL for this by searching jsdelivr and finding their axe-core page.

The second line calls the .run() method, which defaults to returning an enormous JSON object containing the results of the audit.

shot-scraper dumps the return value of that async() function to standard output in my terminal.

The output started like this:

{
    "testEngine": {
        "name": "axe-core",
        "version": "4.7.2"
    },
    "testRunner": {
        "name": "axe"
    },
    "testEnvironment": {
        "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/115.0.5790.75 Safari/537.36",
        "windowWidth": 1280,
        "windowHeight": 720,
        "orientationAngle": 0,
        "orientationType": "landscape-primary"
    },
    "timestamp": "2023-07-30T18:32:39.591Z",
    "url": "https://datasette.io/",
    "toolOptions": {
        "reporter": "v1"
    },
    "inapplicable": [
        {
            "id": "accesskeys",
            "impact": null,
            "tags": [
                "cat.keyboard",
                "best-practice"
            ],

That inapplicable section goes on for a long time, but it's not actually interesting - it shows all of the audit checks that the page passed.

The most interesting section is called violations. We can filter to just that using jq:

shot-scraper javascript https://datasette.io "
async () => {
  const axeCore = await import('https://cdn.jsdelivr.net/npm/axe-core@4.7.2/+esm');
  return axeCore.default.run();
}
" | jq .violations

Which produced (for my page) an array of four objects, starting like this:

[
  {
    "id": "color-contrast",
    "impact": "serious",
    "tags": [
      "cat.color",
      "wcag2aa",
      "wcag143",
      "ACT",
      "TTv5",
      "TT13.c"
    ],
    "description": "Ensures the contrast between foreground and background colors meets WCAG 2 AA minimum contrast ratio thresholds",
    "help": "Elements must meet minimum color contrast ratio thresholds",
    "helpUrl": "https://dequeuniversity.com/rules/axe/4.7/color-contrast?application=axeAPI",
    "nodes": [
      {
        "any": [
          {
            "id": "color-contrast",
            "data": {
              "fgColor": "#ffffff",
              "bgColor": "#8484f4",
              "contrastRatio": 3.18,
              "fontSize": "10.8pt (14.4px)",
              "fontWeight": "normal",
              "messageKey": null,
              "expectedContrastRatio": "4.5:1",
              "shadowColor": null
            },
            "relatedNodes": [
              {
                "html": "<input type=\"submit\" value=\"Search\">",
                "target": [
                  "input[type=\"submit\"]"
                ]
              }
            ],
            "impact": "serious",
            "message": "Element has insufficient color contrast of 3.18 (foreground color: #ffffff, background color: #8484f4, font size: 10.8pt (14.4px), font weight: normal). Expected contrast ratio of 4.5:1"
          }
        ],

I loaded these into a SQLite database using sqlite-utils:

shot-scraper javascript https://datasette.io "
async () => {
  const axeCore = await import('https://cdn.jsdelivr.net/npm/axe-core@4.7.2/+esm');
  return axeCore.default.run();
}
" | jq .violations \
  | sqlite-utils insert /tmp/v.db violations -                  

Then I ran open /tmp/v.db to open that database in Datasette Desktop.

Datasette running against that new table, faceted by impact and tags
