ShardingSphere SQL Parsing


Author: 甜甜起司猫_ | Published 2021-09-01 01:50


Process

1. ShardingSphereStatement's createExecutionContext method is called, which produces a LogicSQL.
2. A ShardingSphereSQLParserEngine is constructed; in its constructor, a sqlStatementParserEngine is obtained from a factory method and a distSQLStatementParserEngine is created directly.
3. ShardingSphereSQLParserEngine.parse0 is called, which in turn calls SQLStatementParserEngine.parse.
4. SQLParserExecutor.twoPhaseParse is called.
5. The actual parsing is done by an ANTLR Parser; since the database here is MySQL, the concrete implementation is MySQLParser (see the sketch after this list).
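
To make the chain concrete, here is a minimal sketch that drives the parser engine by hand, outside of ShardingSphereStatement. The class names come from the code quoted below; the import paths follow 5.0.0-beta and are an assumption, since packages move between versions:

    // Sketch only: package paths are assumed from 5.0.0-beta and may differ in other versions.
    import org.apache.shardingsphere.infra.parser.ShardingSphereSQLParserEngine;
    import org.apache.shardingsphere.sql.parser.sql.common.statement.SQLStatement;
    
    public final class ParseChainSketch {
        
        public static void main(final String[] args) {
            // "MySQL" is the trunk database type name that createLogicSQL resolves via DatabaseTypeRegistry.
            ShardingSphereSQLParserEngine engine = new ShardingSphereSQLParserEngine("MySQL");
            // useCache = false bypasses the Guava statement cache described later in this article.
            SQLStatement statement = engine.parse(
                    "CREATE TABLE IF NOT EXISTS t_order (order_id BIGINT NOT NULL AUTO_INCREMENT, "
                            + "user_id INT NOT NULL, address_id BIGINT NOT NULL, status VARCHAR(50), PRIMARY KEY (order_id))",
                    false);
            // For this DDL the concrete result is a MySQLCreateTableStatement.
            System.out.println(statement.getClass().getSimpleName());
        }
    }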

Method walkthrough

The SQL being executed

    CREATE TABLE IF NOT EXISTS t_order (order_id BIGINT NOT NULL AUTO_INCREMENT, user_id INT NOT NULL, address_id BIGINT NOT NULL, status VARCHAR(50), PRIMARY KEY (order_id))
    
        @Override
        public ResultSet executeQuery(final String sql) throws SQLException {
            if (Strings.isNullOrEmpty(sql)) {
                throw new SQLException(SQLExceptionConstant.SQL_STRING_NULL_OR_EMPTY);
            }
            ResultSet result;
            try {
                // Parse the SQL and build the execution context (the focus of this article).
                executionContext = createExecutionContext(sql);
                // Execute against the routed data sources and merge the per-shard results.
                List<QueryResult> queryResults = executeQuery0();
                MergedResult mergedResult = mergeQuery(queryResults);
                result = new ShardingSphereResultSet(getResultSetsForShardingSphereResultSet(), mergedResult, this, executionContext);
            } finally {
                currentResultSet = null;
            }
            currentResultSet = result;
            return result;
        }
    
        private ExecutionContext createExecutionContext(final String sql) throws SQLException {
            clearStatements();
            LogicSQL logicSQL = createLogicSQL(sql);
            SQLCheckEngine.check(logicSQL.getSqlStatementContext().getSqlStatement(), logicSQL.getParameters(), 
                    metaDataContexts.getMetaData(connection.getSchemaName()).getRuleMetaData().getRules(), connection.getSchemaName(), metaDataContexts.getMetaDataMap(), null);
            return kernelProcessor.generateExecutionContext(logicSQL, metaDataContexts.getMetaData(connection.getSchemaName()), metaDataContexts.getProps());
        }
    
        private LogicSQL createLogicSQL(final String sql) {
            ShardingSphereSQLParserEngine sqlParserEngine = new ShardingSphereSQLParserEngine(
                    DatabaseTypeRegistry.getTrunkDatabaseTypeName(metaDataContexts.getMetaData(connection.getSchemaName()).getResource().getDatabaseType()));
            // Parse the SQL text into a SQLStatement (no cache here: useCache = false).
            SQLStatement sqlStatement = sqlParserEngine.parse(sql, false);
            // Wrap the statement with a metadata-aware context, then bundle everything into a LogicSQL.
            SQLStatementContext<?> sqlStatementContext = SQLStatementContextFactory.newInstance(metaDataContexts.getMetaDataMap(), Collections.emptyList(), sqlStatement,
                    connection.getSchemaName());
            return new LogicSQL(sqlStatementContext, sql, Collections.emptyList());
        }
    
    

The executed SQL is parsed into a LogicSQL, which is then used to:

1. generate the RouteContext
2. generate the ExecutionContext
3. print the SQL execution log
    
    public final class ShardingSphereSQLParserEngine {
        
        private final SQLStatementParserEngine sqlStatementParserEngine;
        
        private final DistSQLStatementParserEngine distSQLStatementParserEngine;
        
        public ShardingSphereSQLParserEngine(final String databaseTypeName) {
            sqlStatementParserEngine = SQLStatementParserEngineFactory.getSQLStatementParserEngine(databaseTypeName);
            distSQLStatementParserEngine = new DistSQLStatementParserEngine();
        }
    
        private SQLStatement parse0(final String sql, final boolean useCache) {
            try {
                return sqlStatementParserEngine.parse(sql, useCache);
            } catch (final SQLParsingException | ParseCancellationException originalEx) {
                try {
                    return distSQLStatementParserEngine.parse(sql);
                } catch (final SQLParsingException ignored) {
                    throw originalEx;
                }
            }
        }
    }
    

The SQLStatementParserEngine is tried first; only when it throws a parsing exception does parse0 fall back to the DistSQLStatementParserEngine, and if that also fails, the original exception is rethrown.

What is the DistSQLStatementParserEngine for?
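
As far as I can tell, DistSQL is ShardingSphere's own administrative SQL dialect (RDL/RQL/RAL statements for managing resources and rules), so it has its own grammar and parser engine. A hedged sketch of the fallback case, reusing only the no-arg constructor and parse(sql) call shown above; the ShardingSphere import is omitted because its package path varies by version, and the DistSQL text is just an illustrative example:

    public final class DistSqlFallbackSketch {
        
        public static void main(final String[] args) {
            // Mirrors the fallback branch of parse0: a statement the MySQL grammar rejects
            // is handed to the DistSQL engine instead.
            DistSQLStatementParserEngine distEngine = new DistSQLStatementParserEngine();
            // "SHOW SHARDING TABLE RULES" is a DistSQL (RQL) statement, not valid MySQL.
            System.out.println(distEngine.parse("SHOW SHARDING TABLE RULES").getClass().getSimpleName());
        }
    }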

    public final class SQLStatementParserEngine {
        
        private final SQLStatementParserExecutor sqlStatementParserExecutor;
        
        private final LoadingCache<String, SQLStatement> sqlStatementCache; // Guava cache of parsed statements
        
        public SQLStatementParserEngine(final String databaseType) {
            sqlStatementParserExecutor = new SQLStatementParserExecutor(databaseType);
            // TODO use props to configure cache option
            sqlStatementCache = SQLStatementCacheBuilder.build(new CacheOption(2000, 65535L, 4), databaseType);
        }
        
        /**
         * Parse to SQL statement.
         *
         * @param sql SQL to be parsed
         * @param useCache whether use cache
         * @return SQL statement
         */
        public SQLStatement parse(final String sql, final boolean useCache) {
            return useCache ? sqlStatementCache.getUnchecked(sql) : sqlStatementParserExecutor.parse(sql);
        }
    }
    

A Guava LoadingCache is used so that SQL text which has already been parsed is not parsed again.
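
The three CacheOption arguments appear to be initial capacity, maximum size and concurrency level. Below is a minimal sketch, using Guava directly, of the kind of cache SQLStatementCacheBuilder builds; ParsedSqlCacheSketch and the parser function are illustrative stand-ins, not ShardingSphere classes:

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    
    import java.util.function.Function;
    
    // Stand-in for the cache behind SQLStatementCacheBuilder.build(new CacheOption(2000, 65535L, 4), databaseType).
    public final class ParsedSqlCacheSketch<T> {
        
        private final LoadingCache<String, T> cache;
        
        public ParsedSqlCacheSketch(final Function<String, T> parser) {
            cache = CacheBuilder.newBuilder()
                    .initialCapacity(2000)
                    .maximumSize(65535L)
                    .concurrencyLevel(4)
                    .build(new CacheLoader<String, T>() {
                        
                        @Override
                        public T load(final String sql) {
                            // Cache miss: parse once, then reuse the result for identical SQL text.
                            return parser.apply(sql);
                        }
                    });
        }
        
        public T get(final String sql) {
            // Mirrors sqlStatementCache.getUnchecked(sql) in SQLStatementParserEngine.parse(sql, true).
            return cache.getUnchecked(sql);
        }
    }

Keying on the raw SQL text means only textually identical statements hit the cache, so parameterized SQL from PreparedStatement benefits the most. When the cache is bypassed or misses, parsing falls through to SQLStatementParserExecutor, whose parse method (next) first builds an ANTLR parse tree and then visits it to produce a SQLStatement: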

        public SQLStatement parse(final String sql) {
            return visitorEngine.visit(parserEngine.parse(sql, false));
        }
    
    // Choose and reflectively instantiate the ParseTreeVisitor that matches the statement type.
    private static <T> ParseTreeVisitor<T> createParseTreeVisitor(final SQLVisitorFacade visitorFacade, final SQLStatementType type, final Properties props) {
            switch (type) {
                case DML:
                    return (ParseTreeVisitor) visitorFacade.getDMLVisitorClass().getConstructor(Properties.class).newInstance(props);
                case DDL:
                    return (ParseTreeVisitor) visitorFacade.getDDLVisitorClass().getConstructor(Properties.class).newInstance(props);
                case TCL:
                    return (ParseTreeVisitor) visitorFacade.getTCLVisitorClass().getConstructor(Properties.class).newInstance(props);
                case DCL:
                    return (ParseTreeVisitor) visitorFacade.getDCLVisitorClass().getConstructor(Properties.class).newInstance(props);
                case DAL:
                    return (ParseTreeVisitor) visitorFacade.getDALVisitorClass().getConstructor(Properties.class).newInstance(props);
                case RL:
                    return (ParseTreeVisitor) visitorFacade.getRLVisitorClass().getConstructor(Properties.class).newInstance(props);
                default:
                    throw new SQLParsingException("Can not support SQL statement type: `%s`", type);
            }
        }
    
        private ParseASTNode twoPhaseParse(final String sql) {
            DatabaseTypedSQLParserFacade sqlParserFacade = DatabaseTypedSQLParserFacadeRegistry.getFacade(databaseType);
            SQLParser sqlParser = SQLParserFactory.newInstance(sql, sqlParserFacade.getLexerClass(), sqlParserFacade.getParserClass());
            try {
                ((Parser) sqlParser).getInterpreter().setPredictionMode(PredictionMode.SLL);
                return (ParseASTNode) sqlParser.parse();
            } catch (final ParseCancellationException ex) {
                ((Parser) sqlParser).reset();
                ((Parser) sqlParser).getInterpreter().setPredictionMode(PredictionMode.LL);
                try {
                    return (ParseASTNode) sqlParser.parse();
                } catch (final ParseCancellationException e) {
                    throw new SQLParsingException("You have an error in your SQL syntax");
                }
            }
        }
    
1. From the ParseASTNode produced by the ANTLR Parser, the type of the executed SQL is determined (the statement here is a CREATE TABLE, so the type is DDL; the SLL/LL two-stage strategy used to obtain this parse tree is sketched after this list).
2. Based on the SQL type, an SQLVisitorFacade visitor strategy is selected.
3. That visitor converts the ParseASTNode into a SQLStatement (with MySQL as the database and a CREATE TABLE statement, the result is a MySQLCreateTableStatement).
4. Based on the SQLStatement type, the corresponding SQLStatementContext is created.
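
The SLL-then-LL retry in twoPhaseParse is ANTLR's standard two-stage parsing strategy: SLL prediction is much faster and succeeds for almost all inputs, and full LL prediction is only used after SLL bails out. Here is a generic sketch of the same pattern using plain ANTLR APIs; TwoStageParseSketch and the startRule parameter are illustrative, and the sketch assumes the parser uses a bail-out error strategy so syntax errors surface as ParseCancellationException, which the catch clauses above rely on:

    import java.util.function.Supplier;
    
    import org.antlr.v4.runtime.Parser;
    import org.antlr.v4.runtime.ParserRuleContext;
    import org.antlr.v4.runtime.atn.PredictionMode;
    import org.antlr.v4.runtime.misc.ParseCancellationException;
    
    public final class TwoStageParseSketch {
        
        // Usage (illustrative): TwoStageParseSketch.parse(myParser, myParser::startRule);
        public static ParserRuleContext parse(final Parser parser, final Supplier<ParserRuleContext> startRule) {
            parser.getInterpreter().setPredictionMode(PredictionMode.SLL);
            try {
                // Fast path: SLL prediction is correct for the vast majority of real-world SQL.
                return startRule.get();
            } catch (final ParseCancellationException ex) {
                // Slow path: rewind the token stream and retry with full (precise) LL prediction.
                parser.reset();
                parser.getInterpreter().setPredictionMode(PredictionMode.LL);
                return startRule.get();
            }
        }
    }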

Summary

1. Parse results are cached locally with Guava.
2. The actual parsing is done with ANTLR.
