Memory leakage question
Markus Moeller
huaraz at moeller.plus.com
Sat May 19 10:27:16 EDT 2007
I have written a tool which processes GSSAPI tokens and loops forever. Since
it may run for a long time, I am trying to check with valgrind that it
doesn't leak memory.
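For reference, I run the check roughly like this (--show-reachable is what
produces the "still reachable" records below; the binary name is from my
build):

valgrind --leak-check=full --show-reachable=yes ./squid_kerb_auth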
I noticed the following two valgrind messages:
==866== 128 bytes in 4 blocks are still reachable in loss record 2 of 4
==866==    at 0x40233F0: malloc (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==866==    by 0x40389AF: register_mech (g_initialize.c:464)
==866==    by 0x4038B28: init_hardcoded (g_initialize.c:517)
==866==    by 0x403898E: updateMechList (g_initialize.c:452)
==866==    by 0x4038F75: gssint_get_mechanism (g_initialize.c:555)
==866==    by 0x4030F5B: gss_acquire_cred (g_acquire_cred.c:162)
==866==    by 0x8049C6A: main (squid_kerb_auth.c:353)
==866==
==866== 133 bytes in 1 blocks are definitely lost in loss record 3 of 4
==866==    at 0x40233F0: malloc (in /usr/lib/valgrind/x86-linux/vgpreload_memcheck.so)
==866==    by 0x403DEC8: krb5_gss_accept_sec_context (accept_sec_context.c:822)
==866==    by 0x404CD70: k5glue_accept_sec_context (krb5_gss_glue.c:434)
==866==    by 0x403094B: gss_accept_sec_context (g_accept_sec_context.c:195)
==866==    by 0x8049D05: main (squid_kerb_auth.c:359)
==866==
Looking at g_acquire_cred.c, it says:
    /*
     * if desired_mechs equals GSS_C_NULL_OID_SET, then pick an
     * appropriate default. We use the first mechanism in the
     * mechanism list as the default. This set is created with
     * statics thus needs not be freed
     */
    if (desired_mechs == GSS_C_NULL_OID_SET) {
        mech = gssint_get_mechanism(NULL);
        if (mech == NULL)
            return (GSS_S_BAD_MECH);
        mechs = &default_OID_set;
        default_OID_set.count = 1;
        default_OID_set.elements = &default_OID;
        default_OID.length = mech->mech_type.length;
        default_OID.elements = mech->mech_type.elements;
    } else
        mechs = desired_mechs;
Since I use GSS_C_NULL_OID_SET, mechs will never be freed. Or is there a way
to free it from my application?
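If the answer is simply that this is the static mechanism list living for
the whole lifetime of the process, I suppose I could hide it with a valgrind
suppression along these lines (just a sketch built from the stack above; the
suppression name is made up):

{
   gssapi_static_mech_list
   Memcheck:Leak
   fun:malloc
   fun:register_mech
}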
The second valgrind message I traced to the output_token in
accept_sec_context.c, and I am not sure whether I am doing something wrong.
I use the following:
gss_buffer_desc output_token = GSS_C_EMPTY_BUFFER;

major_status = gss_accept_sec_context(&minor_status,
                                      &gss_context,
                                      my_gss_creds,
                                      &input_token,
                                      GSS_C_NO_CHANNEL_BINDINGS,
                                      &client_name,
                                      NULL,
                                      &output_token,
                                      &ret_flags,
                                      NULL,
                                      &delegated_cred);

gss_release_buffer(&minor_status, &output_token);
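In case it matters, the rest of my cleanup after each token looks roughly
like this (simplified, error handling omitted), so I don't think the lost
bytes come from the other output arguments:

gss_release_name(&minor_status, &client_name);
gss_release_cred(&minor_status, &delegated_cred);
gss_delete_sec_context(&minor_status, &gss_context, GSS_C_NO_BUFFER);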
In accept_sec_context.c I see the following happening:
gss_buffer_desc token;
...
output_token->length = 0;
output_token->value = NULL;
...
token.length = g_token_size(mech_used, ap_rep.length);
if ((token.value = (unsigned char *) xmalloc(token.length)) == NULL) {
    major_status = GSS_S_FAILURE;
    code = ENOMEM;
    goto fail;
}
...
*output_token = token;
So it seems accept_sec_context does not fill in the output_token it first
initialised, but instead replaces the whole structure with its own locally
allocated token, and I am not sure whether that can create a memory problem.
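My understanding of the intended ownership is the following stand-alone
sketch (my paraphrase, not the literal library code; plain malloc/free stand
in for xmalloc and gss_release_buffer, since gss_release_buffer ultimately
just frees output_token.value):

#include <stdlib.h>
#include <string.h>

struct buf { size_t length; void *value; };

/* Stands in for the mechanism: it ignores the caller's empty buffer
 * and overwrites the whole struct with its own allocation. */
static void produce(struct buf *output_token)
{
    struct buf token;

    token.length = 16;
    token.value = malloc(token.length);
    if (token.value != NULL)
        memset(token.value, 0, token.length);
    *output_token = token;        /* ownership passes to the caller */
}

int main(void)
{
    struct buf output_token = { 0, NULL };

    produce(&output_token);
    free(output_token.value);     /* what gss_release_buffer does for me */
    return 0;
}

If that reading is right, my single gss_release_buffer() call should be
enough, and I don't see where the 133 lost bytes come from.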
Thanks
Markus