The Delegation Token Authentication Flow

1. Background

The article at https://blog.51cto.com/u_15327484/8153877 described Hadoop's Kerberos-based authentication: on its first access to a server, the client obtains a TGT through JAAS, then obtains a service ticket through GSSAPI over SASL to complete authentication.

When a user submits a job to YARN, if the job spawns tens of thousands of containers and each container accesses the HDFS NameNode, two problems arise:

  1. The keytab file must be shipped to every container so each one can authenticate, which makes the keytab easy to leak and creates a security risk.
  2. Before accessing the NameNode, every container authenticates against the KDC, which can make the KDC a performance bottleneck.

To solve this, Hadoop introduced the Delegation Token mechanism: right after the client completes Kerberos authentication with the server, it requests a Delegation Token, and subsequent client requests to the NameNode authenticate with that token. The token is propagated by the YARN framework to every container, so containers authenticate to the NameNode with the token directly, without creating new credentials.

2. Introduction to Delegation Tokens

The authentication flow of a YARN job is as follows; a client-side sketch of the first two steps follows the list:

  1. The client authenticates to the NameNode with Kerberos, then requests a Delegation Token.
  2. The client submits the job, carrying the token in the request.
  3. When the ResourceManager launches containers, it passes the token along.
  4. Containers authenticate to the NameNode with that token.
  5. YARN renews the token with HDFS through a timer thread.
  6. KMS stores encryption keys; it is usually not deployed in production and can be ignored here.

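The client-side half of steps 1 and 2 can be sketched as follows. This is a minimal illustration rather than code from Hadoop or this article; the renewer principal string is a placeholder for whatever yarn.resourcemanager.principal resolves to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class ObtainHdfsTokenSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // assumes a Kerberos login has already happened (e.g. kinit or a UGI login)
    FileSystem fs = FileSystem.get(conf);
    Credentials creds = new Credentials();
    // ask the NameNode(s) for delegation tokens; the renewer is a placeholder
    fs.addDelegationTokens("rm/_HOST@EXAMPLE.COM", creds);
    for (Token<?> t : creds.getAllTokens()) {
      System.out.println(t.getKind() + " -> " + t.getService());
    }
  }
}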

This article walks through how a MapReduce job requests a Delegation Token from the NameNode, and how the ResourceManager renews it. Note that at submission time the job also requests Delegation Tokens for the ResourceManager, the history server, and other services; those tokens are handled the same way as the NameNode's, so they are not covered again.

3. The MapReduce Client Requests a Delegation Token from the NameNode

When a MapReduce job runs, the client calls getSplits to divide the input data into multiple splits. Before splitting, it fetches the metadata of every HDFS file the job will process:

public InputSplit[] getSplits(JobConf job, int numSplits)
    throws IOException {
    StopWatch sw = new StopWatch().start();    
    FileStatus[] stats = listStatus(job);
    // split logic omitted
  }

Before listing the HDFS files, listStatus obtains Delegation Tokens from the NameNode. Among its inputs, job.getCredentials() is the Credentials object of the current job; the delegation tokens the job obtains are stored in this object:

protected FileStatus[] listStatus(JobConf job) throws IOException {
    Path[] dirs = getInputPaths(job);
    if (dirs.length == 0) {
      throw new IOException("No input paths specified in job");
    }

    // get tokens for all the required FileSystems..
    // obtain the Delegation Tokens
    TokenCache.obtainTokensForNamenodes(job.getCredentials(), dirs, job);
    // ... then list all the HDFS input files
}

TokenCache.obtainTokensForNamenodesInternal resolves the NameNode for every HDFS path the MR job will access, then requests a Delegation Token from each distinct NameNode:

static void obtainTokensForNamenodesInternal(Credentials credentials,
      Path[] ps, Configuration conf) throws IOException {
    // collect the FileSystem (NameNode) of every HDFS path the MR job touches
    Set<FileSystem> fsSet = new HashSet<FileSystem>();
    for(Path p: ps) {
      fsSet.add(p.getFileSystem(conf));
    }
    // pick a principal to mark as the renewer of the Delegation Tokens
    String masterPrincipal = Master.getMasterPrincipal(conf);
    for (FileSystem fs : fsSet) {
      // obtain a Delegation Token from each cluster
      obtainTokensForNamenodesInternal(fs, credentials, conf, masterPrincipal);
    }
  }

In getMasterPrincipal, since mapreduce.framework.name is set to yarn, the value of yarn.resourcemanager.principal is selected as the Delegation Token's renewer:

public static String getRmPrincipal(Configuration conf) throws IOException {
    // read the value of yarn.resourcemanager.principal
    String principal = conf.get(YarnConfiguration.RM_PRINCIPAL);
    String prepared = null;

    if (principal != null) {
      prepared = getRmPrincipal(principal, conf);
    }

    return prepared;
  }

A typical yarn.resourcemanager.principal configuration looks like this:

<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hadoop/_HOST@NIE.NETEASE.COM</value>
</property>
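Before the principal is used as the renewer, getRmPrincipal(principal, conf) expands the _HOST placeholder to the ResourceManager's actual hostname. A rough sketch of that expansion using Hadoop's SecurityUtil helper (the hostname below is illustrative):

import org.apache.hadoop.security.SecurityUtil;

// expands "hadoop/_HOST@NIE.NETEASE.COM" into e.g. "hadoop/rm1.example.com@NIE.NETEASE.COM"
String renewer = SecurityUtil.getServerPrincipal(
    "hadoop/_HOST@NIE.NETEASE.COM", "rm1.example.com");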

Next, DistributedFileSystem.collectDelegationTokens runs. Here issuer is the current DistributedFileSystem object, renewer is the value of yarn.resourcemanager.principal, credentials is still empty at this point, and tokens is empty.

If the job does not yet hold a token for the target cluster, it asks the NameNode to create one:

static void collectDelegationTokens(
      final DelegationTokenIssuer issuer,
      final String renewer,
      final Credentials credentials,
      final List<Token<?>> tokens) throws IOException {
    // the canonical name of the cluster the client is accessing
    final String serviceName = issuer.getCanonicalServiceName();
    // Collect token of the this issuer and then of its embedded children
    if (serviceName != null) {
      final Text service = new Text(serviceName);
      // check whether the job already holds a token for this cluster
      Token<?> token = credentials.getToken(service);
      if (token == null) {
        // no token for this cluster yet: ask the NameNode to create one
        token = issuer.getDelegationToken(renewer);
        if (token != null) {
          tokens.add(token);
          credentials.addToken(service, token);
        }
      }
    }
    // omitted: collect tokens of embedded child issuers
  }

4. The NameNode Creates the Delegation Token

On the server side, the NameNode creates the token in FSNamesystem.getDelegationToken:

  1. Build a DelegationTokenIdentifier from effectiveUser, realUser, and renewer. The identifier serves as the key for looking up the token; it does not itself contain the token secret.
  2. Create the token.
  3. Record the token creation as an edits entry and persist it to disk, as the following excerpt shows:

      UserGroupInformation ugi = getRemoteUser();
      // the effectiveUser
      String user = ugi.getUserName();
      Text owner = new Text(user);
      // the realUser
      Text realUser = null;
      if (ugi.getRealUser() != null) {
        realUser = new Text(ugi.getRealUser().getUserName());
      }
      // build the DelegationTokenIdentifier
      DelegationTokenIdentifier dtId = new DelegationTokenIdentifier(owner,
        renewer, realUser);
      // create the Delegation Token
      token = new Token<DelegationTokenIdentifier>(
        dtId, dtSecretManager);
      long expiryTime = dtSecretManager.getTokenExpiryTime(dtId);
      // persist to the edit log
      getEditLog().logGetDelegationToken(dtId, expiryTime);

4.1 Building the DelegationTokenIdentifier

First, look at how the DelegationTokenIdentifier is built. The NameNode creates the DelegationTokenIdentifier before the token itself; whenever the token is needed later, the NameNode retrieves the token information through the DelegationTokenIdentifier.

Note that the DelegationTokenIdentifier needs an effectiveUser and a realUser. After a job is submitted, the ResourceManager does not hold the client's UGI, so inside the ResourceManager the client user has no Kerberos credentials of its own for reaching other components. To let the job run, the ResourceManager acts as a proxy: it authenticates to the other Hadoop components with its own TGT, and in the resulting authentication context the effectiveUser is the user who submitted the job, while the realUser is the user behind the ResourceManager's own principal. The IpcConnectionContextProtos.proto protocol defines these two users; a small sketch of the proxy UGI follows the message definition:

message UserInformationProto {
  optional string effectiveUser = 1;
  optional string realUser = 2;
}
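A proxy UGI is what produces these two users on an RPC connection. A minimal sketch, assuming the RM is the logged-in user and "alice" submitted the job (real impersonation additionally requires hadoop.proxyuser.* configuration):

import org.apache.hadoop.security.UserGroupInformation;

// realUser: the principal the RM itself logged in with
UserGroupInformation rmUgi = UserGroupInformation.getLoginUser();
// effectiveUser: the job submitter being impersonated
UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser("alice", rmUgi);
System.out.println(proxyUgi.getUserName());               // "alice"
System.out.println(proxyUgi.getRealUser().getUserName()); // the RM's own user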

Besides effectiveUser, realUser, and renewer, the DelegationTokenIdentifier also carries the issue date, the max date, and ID fields:

  private Text owner;
  private Text renewer;
  private Text realUser;
  private long issueDate;
  private long maxDate;
  private int sequenceNumber;
  private int masterKeyId = 0;

4.2 Creating the Token

Next, the NameNode constructs the token from the DelegationTokenIdentifier. A Token has these important members:

  1. byte[] password: the actual token secret.
  2. identifier: the key used to look up the token.
  3. kind: the token type. There are many kinds, such as RM_DELEGATION_TOKEN, HDFS_BLOCK_TOKEN, and TIMELINE_DELEGATION_TOKEN; the token the NameNode creates here is of kind HDFS_DELEGATION_TOKEN, as the constructor below shows.

public Token(T id, SecretManager<T> mgr) {
    password = mgr.createPassword(id);
    identifier = id.getBytes();
    kind = id.getKind();
    service = new Text();
  }

The NameNode creates the password through DelegationTokenSecretManager.createPassword; this password is the actual content of the token.

protected synchronized byte[] createPassword(TokenIdent identifier) {
    int sequenceNum;
    long now = Time.now();
    // assign a new sequence number and stamp the identifier fields
    sequenceNum = incrementDelegationTokenSeqNum();
    identifier.setIssueDate(now);
    identifier.setMaxDate(now + tokenMaxLifetime);
    identifier.setMasterKeyId(currentKey.getKeyId());
    identifier.setSequenceNumber(sequenceNum);
    LOG.info("Creating password for identifier: " + formatTokenId(identifier)
        + ", currentKey: " + currentKey.getKeyId());
    // derive a byte array from the identifier and the current master key; this is the password
    byte[] password = createPassword(identifier.getBytes(), currentKey.getKey());
    // build a DelegationTokenInformation holding the renew date, the password, and the trackingId
    DelegationTokenInformation tokenInfo = new DelegationTokenInformation(now
        + tokenRenewInterval, password, getTrackingIdIfEnabled(identifier));
    try {
      // store the identifier together with its DelegationTokenInformation
      storeToken(identifier, tokenInfo);
    } catch (IOException ioe) {
      LOG.error("Could not store token " + formatTokenId(identifier) + "!!",
          ioe);
    }
    return password;
  }

storeToken simply caches the token inside the NameNode by putting it into the currentTokens map. Note that storeNewToken is empty and does nothing:

protected final Map<TokenIdent, DelegationTokenInformation> currentTokens 
      = new ConcurrentHashMap<>();

protected void storeToken(TokenIdent ident,
      DelegationTokenInformation tokenInfo) throws IOException {
    currentTokens.put(ident, tokenInfo);
    // in the NameNode, storeNewToken is empty and does nothing
    storeNewToken(ident, tokenInfo.getRenewDate());
  }
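For intuition, createPassword(identifier.getBytes(), currentKey.getKey()) is essentially an HMAC of the serialized identifier under the current master key (Hadoop's SecretManager uses HmacSHA1 by default). A standalone sketch of the same computation, for illustration only:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

static byte[] hmacPassword(byte[] identifier, byte[] masterKey) throws Exception {
  Mac mac = Mac.getInstance("HmacSHA1");
  mac.init(new SecretKeySpec(masterKey, "HmacSHA1"));
  // this digest is the token "password" the client later proves it holds
  return mac.doFinal(identifier);
}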

DelegationTokenSecretManager also starts an ExpiredTokenRemover thread that periodically removes expired tokens:

private void removeExpiredToken() throws IOException {
    long now = Time.now();
    Set<TokenIdent> expiredTokens = new HashSet<TokenIdent>();
    synchronized (this) {
      Iterator<Map.Entry<TokenIdent, DelegationTokenInformation>> i =
          currentTokens.entrySet().iterator();
      while (i.hasNext()) {
        Map.Entry<TokenIdent, DelegationTokenInformation> entry = i.next();
        long renewDate = entry.getValue().getRenewDate();
        if (renewDate < now) {
          expiredTokens.add(entry.getKey());
          i.remove();
        }
      }
    }
    // remainder of the method (handling of the removed tokens) omitted
  }

4.3 Persisting the Delegation Token

After creating the token, the NameNode writes the DelegationTokenIdentifier into the edits:

getEditLog().logGetDelegationToken(dtId, expiryTime);

As shown below, FSEditLog.logGetDelegationToken saves the DelegationTokenIdentifier into the edits:

void logGetDelegationToken(DelegationTokenIdentifier id,
      long expiryTime) {
    GetDelegationTokenOp op = GetDelegationTokenOp.getInstance(cache.get())
      .setDelegationTokenIdentifier(id)
      .setExpiryTime(expiryTime);
    logEdit(op);
  }

The NameNode does not put delegation tokens into ZooKeeper; it puts them into the fsimage and edits. When the Standby NameNode loads edits from the JournalNodes, the entries are folded into its fsimage. When an Active NameNode restores metadata from the fsimage, it rebuilds the original tokens from the DelegationTokenIdentifier entries via createPassword. This is why the edits store only the much smaller DelegationTokenIdentifier rather than the token itself.

As shown below, the NameNode begins restoring metadata:

private void loadFSImage(StartupOption startOpt) throws IOException {
    final FSImage fsImage = getFSImage();
      // replay the ops read from the fsimage/edits files into the in-memory state
      final boolean staleImage
          = fsImage.recoverTransitionRead(startOpt, this, recovery);
  }

After a chain of calls, FSEditLogLoader.applyEditLogOp applies each op from the edit log; for token ops it calls addPersistedDelegationToken to restore the delegation token:

private long applyEditLogOp(FSEditLogOp op, FSDirectory fsDir,
      StartupOption startOpt, int logVersion, long lastInodeId) throws IOException {
    switch (op.opCode) {
    // ... other op codes omitted
    case OP_GET_DELEGATION_TOKEN: {
      GetDelegationTokenOp getDelegationTokenOp
        = (GetDelegationTokenOp)op;

      fsNamesys.getDelegationTokenSecretManager()
        .addPersistedDelegationToken(getDelegationTokenOp.token,
                                     getDelegationTokenOp.expiryTime);
      break;
    }
    }
}

As the code shows, addPersistedDelegationToken regenerates the token directly from the DelegationTokenIdentifier and puts it into the currentTokens map; the regenerated token is identical to the one that was persisted. So even after a NameNode failover, the new NameNode starts with the same delegation tokens, unchanged:

public synchronized void addPersistedDelegationToken(
      DelegationTokenIdentifier identifier, long expiryTime) throws IOException {
    if (running) {
      // a safety check
      throw new IOException(
          "Can't add persisted delegation token to a running SecretManager.");
    }
    int keyId = identifier.getMasterKeyId();
    DelegationKey dKey = allKeys.get(keyId);
    if (dKey == null) {
      LOG
          .warn("No KEY found for persisted identifier "
              + identifier.toString());
      return;
    }
    // recreate the token directly and keep it in memory
    byte[] password = createPassword(identifier.getBytes(), dKey.getKey());
    if (identifier.getSequenceNumber() > this.delegationTokenSequenceNumber) {
      this.delegationTokenSequenceNumber = identifier.getSequenceNumber();
    }
    if (currentTokens.get(identifier) == null) {
      currentTokens.put(identifier, new DelegationTokenInformation(expiryTime,
          password, getTrackingIdIfEnabled(identifier)));
    } else {
      throw new IOException(
          "Same delegation token being added twice; invalid entry in fsimage or editlogs");
    }
  }

5. The YARN Client Ships the Delegation Token to the ResourceManager

On the client side, YARNRunner.submitJob submits the job: it first puts the NameNode delegation tokens it obtained into the ApplicationSubmissionContext, then submits:

public JobStatus submitJob(JobID jobId, String jobSubmitDir, Credentials ts)
  throws IOException, InterruptedException {
    
    addHistoryToken(ts);
    // put the delegation tokens into the ApplicationSubmissionContext;
    // note the Credentials object is what holds the tokens
    ApplicationSubmissionContext appContext =
      createApplicationSubmissionContext(conf, jobSubmitDir, ts);

    // Submit to ResourceManager
    try {
      ApplicationId applicationId =
          resMgrDelegate.submitApplication(appContext);
    // omitted
}

createApplicationSubmissionContext builds the application launch context; the tokens end up in the AM container's launch context:

public ApplicationSubmissionContext createApplicationSubmissionContext(
      Configuration jobConf, String jobSubmitDir, Credentials ts)
      throws IOException {
    ApplicationId applicationId = resMgrDelegate.getApplicationId();

    // Setup LocalResources
    Map<String, LocalResource> localResources =
        setupLocalResources(jobConf, jobSubmitDir);

    // Setup security tokens
    DataOutputBuffer dob = new DataOutputBuffer();
    ts.writeTokenStorageToStream(dob);
    // serialize the tokens
    ByteBuffer securityTokens =
        ByteBuffer.wrap(dob.getData(), 0, dob.getLength());
    // Setup ContainerLaunchContext for AM container
    List<String> vargs = setupAMCommand(jobConf);
    // the tokens go into the AM launch context
    ContainerLaunchContext amContainer = setupContainerLaunchContextForAM(
        jobConf, localResources, securityTokens, vargs);
    appContext.setAMContainerSpec(amContainer);         // AM Container
    // omitted
}
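To make the serialization concrete, here is a hedged sketch of the round trip: the client writes the Credentials into a byte buffer (as above), and the container side reads them back. The variable names are illustrative:

import org.apache.hadoop.io.DataInputBuffer;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.Credentials;

DataOutputBuffer dob = new DataOutputBuffer();
creds.writeTokenStorageToStream(dob);        // what YARNRunner does before submitting
DataInputBuffer dib = new DataInputBuffer();
dib.reset(dob.getData(), 0, dob.getLength());
Credentials restored = new Credentials();
restored.readTokenStorageStream(dib);        // what the container side does on startup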

6. The ResourceManager Passes the Delegation Token When Launching Containers

When the ResourceManager receives the client's tokens, AMLauncher.launch prepares to start the AppMaster's container and puts the tokens into the request sent to the NodeManager. The NodeManager thus holds the tokens, and the container can use them to access HDFS:

private void launch() throws IOException, YarnException {
    connect();
    ContainerId masterContainerID = masterContainer.getId();
    // the context from the client's submission, which contains the tokens
    ApplicationSubmissionContext applicationContext =
        application.getSubmissionContext();
    LOG.info("Setting up container " + masterContainer
        + " for AM " + application.getAppAttemptId());
    // build the AM launch context, carrying the tokens
    ContainerLaunchContext launchContext =
        createAMContainerLaunchContext(applicationContext, masterContainerID);
    // send the context to the NodeManager to start the container
    StartContainersResponse response =
        containerMgrProxy.startContainers(allRequests);
    if (response.getFailedRequests() != null
        && response.getFailedRequests().containsKey(masterContainerID)) {
      Throwable t =
          response.getFailedRequests().get(masterContainerID).deSerialize();
      parseAndThrowException(t);
    } else {
      LOG.info("Done launching container " + masterContainer + " for AM "
          + application.getAppAttemptId());
    }
  }

7. The ResourceManager Renews Tokens Periodically

Each time the RM receives an application-submission event, DelegationTokenRenewer.handleAppSubmitEvent starts managing the delegation tokens. As shown below, DelegationTokenRenewer keeps an app-to-DelegationTokenToRenew map as well as a token-to-DelegationTokenToRenew map.

handleAppSubmitEvent creates a DelegationTokenToRenew object per token and schedules a timer task to renew each one:

private ConcurrentMap<ApplicationId, Set<DelegationTokenToRenew>> appTokens =
      new ConcurrentHashMap<ApplicationId, Set<DelegationTokenToRenew>>();

  private ConcurrentMap<Token<?>, DelegationTokenToRenew> allTokens =
      new ConcurrentHashMap<Token<?>, DelegationTokenToRenew>();

private void handleAppSubmitEvent(AbstractDelegationTokenRenewerAppEvent evt)
      throws IOException, InterruptedException {
    ApplicationId applicationId = evt.getApplicationId();
    // the tokens the client sent over
    Credentials ts = evt.getCredentials();
    // iterate over every token the client passed in
    for (Token<?> token : tokens) {
        // look up an existing DelegationTokenToRenew for this token
        DelegationTokenToRenew dttr = allTokens.get(token);
        if (dttr == null) {
          // none exists yet: create one for this token
          dttr = new DelegationTokenToRenew(Arrays.asList(applicationId), token,
              tokenConf, now, shouldCancelAtEnd, evt.getUser());
          try {
            // renew the token once and record its next expiration time; since this
            // is the token's first DelegationTokenToRenew, no old entry needs evicting
            renewToken(dttr);
          // omitted
      for (DelegationTokenToRenew dtr : tokenList) {
        // put the DelegationTokenToRenew into allTokens;
        // allTokens records whether a token already has a renewal timer
        DelegationTokenToRenew currentDtr =
            allTokens.putIfAbsent(dtr.token, dtr);
          // schedule the timer task that drives token renewal
          setTimerForTokenRenewal(dtr);
        }
      }
    }
  }

setTimerForTokenRenewal schedules a timer task that renews the token:

protected void setTimerForTokenRenewal(DelegationTokenToRenew token)
      throws IOException {
    // calculate timer time
    long expiresIn = token.expirationDate - System.currentTimeMillis();
    if (expiresIn <= 0) {
      LOG.info("Will not renew token " + token);
      return;
    }
    long renewIn = token.expirationDate - expiresIn/10; // little bit before the expiration
    // need to create new task every time
    RenewalTimerTask tTask = new RenewalTimerTask(token);
    token.setTimerTask(tTask); // keep reference to the timer

    renewalTimer.schedule(token.timerTask, new Date(renewIn));
    LOG.info("Renew " + token + " in " + expiresIn + " ms, appId = "
        + token.referringAppIds);
  }
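When the timer fires, the renewal ultimately comes down to a Token.renew call, which RPCs the issuing service (here the NameNode's renewDelegationToken) and returns the new expiration time. A simplified sketch of what renewToken achieves; the real method runs the call as the renewer user and handles failures:

// dttr is a DelegationTokenToRenew; conf is the corresponding Configuration
long newExpiration = dttr.token.renew(conf); // RPC to the token's issuing service
dttr.expirationDate = newExpiration;         // the next timer fires shortly before this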

8. The RM Cannot Renew Delegation Tokens for Long-Running Containers

In the ResourceManager, a DelegationTokenToRenew is created per app and a renewal task is scheduled, so whenever a new container starts it receives the latest delegation token. But there is a gap: a token's max lifetime is 7 days and its renewal interval is 1 day, and the RM can renew a token only up to that max lifetime. If a container runs longer than 7 days, the token expires, the RM has no way to hand a fresh delegation token to containers that are already running, and the YARN application fails.
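These are the standard HDFS keys behind those two limits, shown with their default values in milliseconds:

<property>
  <name>dfs.namenode.delegation.token.max-lifetime</name>
  <value>604800000</value> <!-- 7 days -->
</property>
<property>
  <name>dfs.namenode.delegation.token.renew-interval</name>
  <value>86400000</value> <!-- 24 hours -->
</property>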

For MapReduce batch jobs this is rarely an issue: a job typically runs once a day and finishes within the day. But long-running applications such as Spark Streaming or Flink can easily run past 7 days.

Spark and Flink solve this by uploading the keytab to HDFS. The Spark AppMaster periodically re-authenticates with the keytab to obtain a ticket, fetches fresh tokens from the NameNode, and persists the tokens to HDFS; containers then download the token file from HDFS to complete the token refresh.

First, when the job is submitted with the "--keytab" and "--principal" options, Spark places the keytab in HDFS and sets up the AppMaster's environment; spark.yarn.credentials.file is the location where tokens will be stored:

private def setupLaunchEnv(
      stagingDir: String,
      pySparkArchives: Seq[String]): HashMap[String, String] = {
    ...
    if (loginFromKeytab) {
      val remoteFs = FileSystem.get(hadoopConf)
      val stagingDirPath = new Path(remoteFs.getHomeDirectory, stagingDir)
      val credentialsFile = "credentials-" + UUID.randomUUID().toString
      sparkConf.set(
        "spark.yarn.credentials.file", new Path(stagingDirPath, credentialsFile).toString)
      logInfo(s"Credentials file set to: $credentialsFile")
      val renewalInterval = getTokenRenewalInterval(stagingDirPath)
      sparkConf.set("spark.yarn.token.renewal.interval", renewalInterval.toString)
    }

    ...
}
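For reference, a long-running submission that enables this keytab-based path might look like the following; the paths, class, and principal are illustrative:

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal alice@EXAMPLE.COM \
  --keytab /etc/security/keytabs/alice.keytab \
  --class com.example.StreamingJob \
  streaming-job.jar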

While the AppMaster runs, it periodically re-authenticates to Kerberos, refreshes the tokens, and uploads them to the spark.yarn.credentials.file path. This happens in writeNewTokensToHDFS:

private def writeNewTokensToHDFS(principal: String, keytab: String): Unit = {
    logInfo(s"Attempting to login to KDC using principal: $principal")
    // 1) log in to the KDC again
    val keytabLoggedInUGI = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab)
    logInfo("Successfully logged into KDC.")
    val tempCreds = keytabLoggedInUGI.getCredentials
    val credentialsPath = new Path(credentialsFile)
    val dst = credentialsPath.getParent
    // 2) use the fresh login identity to fetch HDFS delegation tokens from the NameNodes and add them to tempCreds
    keytabLoggedInUGI.doAs(new PrivilegedExceptionAction[Void] {
      // Get a copy of the credentials
      override def run(): Void = {
        val nns = YarnSparkHadoopUtil.get.getNameNodesToAccess(sparkConf) + dst
        hadoopUtil.obtainTokensForNamenodes(nns, freshHadoopConf, tempCreds)
        null
      }
    })
    // Add the temp credentials back to the original ones.
    // 3) add the newly obtained tokens to the current logged-in user
    UserGroupInformation.getCurrentUser.addCredentials(tempCreds)
    val remoteFs = FileSystem.get(freshHadoopConf)
    // If lastCredentialsFileSuffix is 0, then the AM is either started or restarted. If the AM
    // was restarted, then the lastCredentialsFileSuffix might be > 0, so find the newest file
    // and update the lastCredentialsFileSuffix.
    if (lastCredentialsFileSuffix == 0) {
      hadoopUtil.listFilesSorted(
        remoteFs, credentialsPath.getParent,
        credentialsPath.getName, SparkHadoopUtil.SPARK_YARN_CREDS_TEMP_EXTENSION)
        .lastOption.foreach { status =>
        lastCredentialsFileSuffix = hadoopUtil.getSuffixForCredentialsPath(status.getPath)
      }
    }
    val nextSuffix = lastCredentialsFileSuffix + 1
    val tokenPathStr =
      credentialsFile + SparkHadoopUtil.SPARK_YARN_CREDS_COUNTER_DELIM + nextSuffix
    val tokenPath = new Path(tokenPathStr)
    val tempTokenPath = new Path(tokenPathStr + SparkHadoopUtil.SPARK_YARN_CREDS_TEMP_EXTENSION)
    logInfo("Writing out delegation tokens to " + tempTokenPath.toString)
    val credentials = UserGroupInformation.getCurrentUser.getCredentials
    // 4) write the credentials out to the target file
    credentials.writeTokenStorageFile(tempTokenPath, freshHadoopConf)
    logInfo(s"Delegation Tokens written out successfully. Renaming file to $tokenPathStr")
    remoteFs.rename(tempTokenPath, tokenPath)
    logInfo("Delegation token file rename complete.")
    lastCredentialsFileSuffix = nextSuffix
  }

Executors periodically fetch the token file from the spark.yarn.credentials.file path:

try {
      val credentialsFilePath = new Path(credentialsFile)
      val remoteFs = FileSystem.get(freshHadoopConf)
      SparkHadoopUtil.get.listFilesSorted(
        remoteFs, credentialsFilePath.getParent,
        credentialsFilePath.getName, SparkHadoopUtil.SPARK_YARN_CREDS_TEMP_EXTENSION)
        .lastOption.foreach { credentialsStatus =>
        val suffix = SparkHadoopUtil.get.getSuffixForCredentialsPath(credentialsStatus.getPath)
        if (suffix > lastCredentialsFileSuffix) {
          logInfo("Reading new delegation tokens from " + credentialsStatus.getPath)
          val newCredentials = getCredentialsFromHDFSFile(remoteFs, credentialsStatus.getPath)
          lastCredentialsFileSuffix = suffix
          UserGroupInformation.getCurrentUser.addCredentials(newCredentials)
          logInfo("Tokens updated from credentials file.")
        } else {
          // Check every hour to see if new credentials arrived.
          logInfo("Updated delegation tokens were expected, but the driver has not updated the " +
            "tokens yet, will check again in an hour.")
          delegationTokenRenewer.schedule(executorUpdaterRunnable, 1, TimeUnit.HOURS)
          return
        }
      }
      val timeFromNowToRenewal =
        SparkHadoopUtil.get.getTimeFromNowToRenewal(
          sparkConf, 0.8, UserGroupInformation.getCurrentUser.getCredentials)
      if (timeFromNowToRenewal <= 0) {
        // We just checked for new credentials but none were there, wait a minute and retry.
        // This handles the shutdown case where the staging directory may have been removed(see
        // SPARK-12316 for more details).
        delegationTokenRenewer.schedule(executorUpdaterRunnable, 1, TimeUnit.MINUTES)
      } else {
        logInfo(s"Scheduling token refresh from HDFS in $timeFromNowToRenewal millis.")
        delegationTokenRenewer.schedule(
          executorUpdaterRunnable, timeFromNowToRenewal, TimeUnit.MILLISECONDS)
      }
    } catch {
      // Since the file may get deleted while we are reading it, catch the Exception and come
      // back in an hour to try again
      case NonFatal(e) =>
        logWarning("Error while trying to update credentials, will try again in 1 hour", e)
        delegationTokenRenewer.schedule(executorUpdaterRunnable, 1, TimeUnit.HOURS)
    }

Note: if the AppMaster does not create the DelegationTokenToRenew for the app itself, the RM creates it and schedules the token renewal logic; if the AppMaster has created it (as Spark does above), the RM does not create it or renew the tokens.

9. The Delegation Token Authentication Framework

As the article at https://blog.51cto.com/u_15327484/8153877 shows, Hadoop authenticates through the SASL framework. Once a token has been obtained, the DIGEST-MD5 mechanism is used for authentication:

public enum AuthMethod {
    SIMPLE((byte) 80, ""),
    KERBEROS((byte) 81, "GSSAPI"),
    @Deprecated
    DIGEST((byte) 82, "DIGEST-MD5"),
    TOKEN((byte) 82, "DIGEST-MD5"),
    PLAIN((byte) 83, "PLAIN");
}

Before issuing an RPC request, the client runs SaslClient.evaluateChallenge to authenticate; with a token, the concrete implementation is DigestMD5Client.evaluateChallenge.


This mechanism no longer contacts the TGS for a service ticket; the client authenticates directly with the server. The detailed exchange is omitted here.
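How does the token map onto DIGEST-MD5? Roughly, the serialized identifier becomes the SASL username and the token password becomes the SASL password, both base64-encoded; this mirrors what Hadoop's SaslRpcServer.encodeIdentifier/encodePassword helpers do. A hedged sketch of the mapping, not the full SaslRpcClient logic:

import java.util.Base64;
import org.apache.hadoop.security.token.Token;

// the SASL username: base64 of the serialized DelegationTokenIdentifier
static String saslUserName(Token<?> token) {
  return Base64.getEncoder().encodeToString(token.getIdentifier());
}

// the SASL password: base64 of the token's password bytes
static char[] saslPassword(Token<?> token) {
  return Base64.getEncoder().encodeToString(token.getPassword()).toCharArray();
}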

