krb5 / 100e4aa
Make zap() more reliable

The gcc assembly version of zap() could still be optimized out under
gcc 5.1 or later, and the krb5int_zap() function could be optimized out
with link-time optimization. Based on work by Zhaomo Yang and Brian
Johannesmeyer, use the C11 memset_s() when available, then fall back to
a memory barrier with gcc or clang, and finally fall back to using
krb5int_zap(). Modify krb5int_zap() to use a volatile pointer in case
link-time optimization is used.

(cherry picked from commit c163275f899b201dc2807b3ff2949d5e2ee7d838)

ticket: 8514
version_fixed: 1.15

Greg Hudson authored 7 years ago
Tom Yu committed 7 years ago
3 changed files with 29 additions and 21 deletions.
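For background, the bug being fixed is dead-store elimination: a compiler may delete a memset() whose result is never read again, for example one issued just before free(), leaving secrets behind in released memory. A minimal sketch of the problem (not krb5 code; the names are illustrative):

    #include <stdlib.h>
    #include <string.h>

    /* With optimization enabled, the memset below is a dead store: nothing
     * reads the buffer afterwards, so an optimizing compiler may remove it
     * and leave the secret bytes in freed memory. */
    static void discard_secret(char *secret, size_t len)
    {
        memset(secret, 0, len);
        free(secret);
    }

    int main(void)
    {
        char *secret = malloc(16);

        if (secret == NULL)
            return 1;
        memcpy(secret, "hunter2", 8);
        discard_secret(secret, 16);
        return 0;
    }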
@@ -52,6 +52,8 @@
 dnl Consider using AC_USE_SYSTEM_EXTENSIONS when we require autoconf
 dnl 2.59c or later, but be sure to test on Solaris first.
 AC_DEFINE([_GNU_SOURCE], 1, [Define to enable extensions in glibc])
+AC_DEFINE([__STDC_WANT_LIB_EXT1__], 1, [Define to enable C11 extensions])
+
 WITH_CC dnl
 AC_REQUIRE_CPP
 if test -z "$LD" ; then LD=$CC; fi
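Per C11 Annex K, __STDC_WANT_LIB_EXT1__ must be defined before <string.h> is included for memset_s() to be declared, and the library advertises that the function is actually available by defining __STDC_LIB_EXT1__, which the header change below tests. A small self-contained probe of that macro pair (illustrative, outside the krb5 build):

    /* The "want" macro must be visible before <string.h> is included; the
     * library then defines __STDC_LIB_EXT1__ if the Annex K functions exist. */
    #define __STDC_WANT_LIB_EXT1__ 1
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
    #ifdef __STDC_LIB_EXT1__
        char secret[16] = "hunter2";

        memset_s(secret, sizeof(secret), 0, sizeof(secret));
        puts("memset_s available; buffer cleared");
    #else
        puts("memset_s not provided by this C library");
    #endif
        return 0;
    }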
@@ -651,30 +651,33 @@
  */
 #ifdef _WIN32
 # define zap(ptr, len) SecureZeroMemory(ptr, len)
-#elif defined(__GNUC__)
+#elif defined(__STDC_LIB_EXT1__)
+/*
+ * Use memset_s() which cannot be optimized out. Avoid memset_s(NULL, 0, 0, 0)
+ * which would cause a runtime constraint violation.
+ */
 static inline void zap(void *ptr, size_t len)
 {
-    memset(ptr, 0, len);
-    /*
-     * Some versions of gcc have gotten clever enough to eliminate a
-     * memset call right before the block in question is released.
-     * This (empty) asm requires it to assume that we're doing
-     * something interesting with the stored (zero) value, so the
-     * memset can't be eliminated.
-     *
-     * An optimizer that looks at assembly or object code may not be
-     * fooled, and may still cause the memset to go away. Address
-     * that problem if and when we encounter it.
-     *
-     * This also may not be enough if free() does something
-     * interesting like purge memory locations from a write-back cache
-     * that hasn't written back the zero bytes yet. A memory barrier
-     * instruction would help in that case.
-     */
-    asm volatile ("" : : "g" (ptr), "g" (len));
+    if (len > 0)
+        memset_s(ptr, len, 0, len);
+}
+#elif defined(__GNUC__) || defined(__clang__)
+/*
+ * Use an asm statement which declares a memory clobber to force the memset to
+ * be carried out. Avoid memset(NULL, 0, 0) which has undefined behavior.
+ */
+static inline void zap(void *ptr, size_t len)
+{
+    if (len > 0)
+        memset(ptr, 0, len);
+    __asm__ __volatile__("" : : "r" (ptr) : "memory");
 }
 #else
-/* Use a function from libkrb5support to defeat inlining. */
+/*
+ * Use a function from libkrb5support to defeat inlining unless link-time
+ * optimization is used. The function uses a volatile pointer, which prevents
+ * current compilers from optimizing out the memset.
+ */
 # define zap(ptr, len) krb5int_zap(ptr, len)
 #endif
 
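The important difference from the old barrier is the "memory" clobber: it tells gcc and clang that the asm statement may read or write any memory, so the stores performed by the preceding memset() cannot be discarded as dead. A standalone sketch for experimenting with the idiom (illustrative function names; compile at -O2 and compare the generated assembly):

    #include <string.h>

    void with_barrier(void)
    {
        char buf[64];

        memset(buf, 0, sizeof(buf));
        /* The empty asm consumes the pointer and clobbers "memory", so the
         * compiler must assume the zeroed bytes are observed and keep them. */
        __asm__ __volatile__("" : : "r" (buf) : "memory");
    }

    void without_barrier(void)
    {
        char buf[64];

        memset(buf, 0, sizeof(buf));   /* typically removed at -O2 */
    }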
@@ -33,5 +33,8 @@
 
 void krb5int_zap(void *ptr, size_t len)
 {
-    memset(ptr, 0, len);
+    volatile char *p = ptr;
+
+    while (len--)
+        *p++ = '\0';
 }
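Stores made through a volatile-qualified pointer are observable side effects that the compiler must preserve, so this loop survives even if link-time optimization inlines krb5int_zap() into its callers. A self-contained sketch of the same idiom with a hypothetical caller:

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Same technique as krb5int_zap(): zero the buffer through a volatile
     * pointer so the byte stores cannot be optimized away. */
    static void wipe_volatile(void *ptr, size_t len)
    {
        volatile char *p = ptr;

        while (len--)
            *p++ = '\0';
    }

    int main(void)
    {
        char *key = malloc(32);

        if (key == NULL)
            return 1;
        memcpy(key, "example key material", 21);
        wipe_volatile(key, 32);
        free(key);
        return 0;
    }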