author     David Hildenbrand <dahi@linux.vnet.ibm.com>    2015-10-26 08:41:29 +0100
committer  Christian Borntraeger <borntraeger@de.ibm.com> 2015-10-29 15:58:41 +0100
commit     c5c2c393468576bad6d10b2b5fefff8cd25df3f4 (patch)
tree       b4e09d33d0adfc57d9b752dde228529c368113f6 /arch/s390/kvm
parent     60417fcc2b0235dfe3dcd589c56dbe3ea1a64c54 (diff)
KVM: s390: SCA must not cross page boundaries
We seem to have missed a few corner cases in commit f6c137ff00a4
("KVM: s390: randomize sca address").
The SCA has a maximum size of 2112 bytes. With some unlucky sca_offset
values, it extends past the end of the page:
0x7c0 (1984) -> Fits exactly
0x7d0 (2000) -> 16 bytes out
0x7e0 (2016) -> 32 bytes out
0x7f0 (2032) -> 48 bytes out
One VCPU entry is 32 bytes long.
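As a quick cross-check of these numbers, here is a minimal user-space
sketch (not kernel code; SCA_MAX_SIZE and S390_PAGE_SIZE are illustrative
names for the 2112-byte maximum SCA size and the 4 KiB page assumed above):

#include <stdio.h>

/* Illustrative constants, not kernel symbols. */
#define SCA_MAX_SIZE   2112
#define S390_PAGE_SIZE 4096

int main(void)
{
        const unsigned int offsets[] = { 0x7c0, 0x7d0, 0x7e0, 0x7f0 };

        for (unsigned int i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
                unsigned int end = offsets[i] + SCA_MAX_SIZE;

                if (end <= S390_PAGE_SIZE)
                        printf("0x%x (%u) -> fits\n", offsets[i], offsets[i]);
                else
                        printf("0x%x (%u) -> %u bytes out\n",
                               offsets[i], offsets[i], end - S390_PAGE_SIZE);
        }
        return 0;
}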
For the last two cases, we actually write data to the other page.
1. The address of the VCPU.
2. Injection/delivery/clearing of SIGP external calls via SIGP IF.
Especially case 2 happens regularly. So this could produce two problems:
1. The guest losing/getting external calls.
2. Random memory overwrites in the host.
So this problem happens for every 127th and 128th created VM with 64 VCPUs.
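To illustrate how often this can happen, here is a rough stand-alone sketch
of the pre-patch update rule from the hunk below,
sca_offset = (sca_offset + 16) & 0x7f0. It cycles through 128 possible
offsets, so the two offsets that spill onto the next page each come up once
per 128 created VMs (the exact VM numbers depend on the initial offset):

#include <stdio.h>

int main(void)
{
        unsigned int sca_offset = 0;

        /* Simulate 256 VM creations with the old update rule. */
        for (int vm = 1; vm <= 256; vm++) {
                sca_offset = (sca_offset + 16) & 0x7f0;

                /* 0x7e0 and 0x7f0 are the two offsets the message above
                 * identifies as writing VCPU data onto the next page. */
                if (sca_offset >= 0x7e0)
                        printf("VM #%d: sca_offset 0x%x crosses the page\n",
                               vm, sca_offset);
        }
        return 0;
}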
Cc: stable@vger.kernel.org # v3.15+
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Diffstat (limited to 'arch/s390/kvm')
-rw-r--r--  arch/s390/kvm/kvm-s390.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 618c85411a51..35596177fad2 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1098,7 +1098,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (!kvm->arch.sca)
 		goto out_err;
 	spin_lock(&kvm_lock);
-	sca_offset = (sca_offset + 16) & 0x7f0;
+	sca_offset += 16;
+	if (sca_offset + sizeof(struct sca_block) > PAGE_SIZE)
+		sca_offset = 0;
 	kvm->arch.sca = (struct sca_block *) ((char *) kvm->arch.sca
 					      + sca_offset);
 	spin_unlock(&kvm_lock);
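For comparison, a rough user-space sketch of the updated logic (SCA_SIZE and
PAGE are stand-ins for sizeof(struct sca_block) and PAGE_SIZE; 2112 is the
maximum SCA size quoted in the message). It checks that the offset now wraps
to 0 before the SCA could cross a page, and that the largest offset handed
out is 0x7c0, the "fits exactly" case from the list above:

#include <assert.h>
#include <stdio.h>

/* Illustrative stand-ins, not the kernel's definitions. */
#define SCA_SIZE 2112
#define PAGE     4096

int main(void)
{
        unsigned int sca_offset = 0;
        unsigned int max_off = 0;

        for (int vm = 0; vm < 1024; vm++) {
                sca_offset += 16;
                if (sca_offset + SCA_SIZE > PAGE)       /* the check the patch adds */
                        sca_offset = 0;

                assert(sca_offset + SCA_SIZE <= PAGE);  /* never crosses the page */
                if (sca_offset > max_off)
                        max_off = sca_offset;
        }
        printf("largest offset handed out: 0x%x\n", max_off);  /* 0x7c0 */
        return 0;
}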