This post does not go into much detail; it is simply a record of the logging configuration used in a real project, kept here so that I and anyone interested can refer back to it.
Logging configuration has always been a fairly important part of service development. First, here is how I categorize logs:
- Access logs: one entry is written for every API call, which makes it easy to trace requests afterwards.
- Regular logs: the log statements we write in the code itself.
- Performance logs: logs used to monitor performance and guide optimization. At my company three kinds of performance logs are recorded:
  - Controller performance logs: how long each controller-layer method call takes.
  - Service performance logs: how long each service-layer method call takes.
  - DAO performance logs: how long each database access in the DAO layer takes.
Below are two ways of implementing this logging:

- Approach 1: Spring Sleuth + Logback
  - Use the Spring Cloud component Spring Sleuth for distributed request tracing, and Logback to customize the log format.
  - This approach only covers the regular logs and the access log here. I am not sure whether the performance logs can be produced this way, so they are not discussed.
- Approach 2: AOP + Interceptor + Logback
  - Use an interceptor to intercept requests and write the access log; the point is to generate a traceId that the regular logs can be traced by.
  - Use AOP to monitor the methods in the controller, service and dao layers and extract the timing information the performance logs need.
1. Spring Sleuth + Logback
Spring Sleuth is the distributed service tracing framework provided by Spring Cloud; it makes it very convenient to produce trace-aware logs. The traceId and spanId it generates also make it easy to follow a request across distributed services.
Spring Boot version:
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.1.RELEASE</version>
<relativePath/>
</parent>
Dependency jars:
<dependency>
<groupId>net.rakugakibox.spring.boot</groupId>
<artifactId>logback-access-spring-boot-starter</artifactId>
<version>2.7.1</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-sleuth</artifactId>
<version>2.0.0.RELEASE</version>
</dependency>
Logback configuration:

Environment variable configuration:
Spring Boot automatically maps some of its property values into Logback, for example:
logging.path --> LOG_PATH
logging.file --> LOG_FILE
spring.application.name=tp-forum-api
# Log
logging.path: ${LOG_PATH:${LOG_TEMP:logs}}
logging.file=${logging.path}/app.log
- logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<!-- include Spring Boot's default Logback configuration -->
<include resource="org/springframework/boot/logging/logback/defaults.xml" />
<include resource="org/springframework/boot/logging/logback/console-appender.xml" />
<!-- read the value of spring.application.name from the Spring configuration -->
<springProperty scope="context" name="applicationName" source="spring.application.name"/>
<!-- property defines a variable: where the log file is written -->
<property name="LOG_FILE" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
<!-- appender configuration -->
<!-- RollingFileAppender: roll over to a new log file every day -->
<!-- TimeBasedRollingPolicy: roll by time; cap the total size at 1GB and keep only 30 days of history -->
<appender name="APP-LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_FILE}</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.log</fileNamePattern>
<!-- keep 30 days' worth of history capped at 1GB total size -->
<maxHistory>30</maxHistory>
<totalSizeCap>1GB</totalSizeCap>
</rollingPolicy>
<encoder>
<!--date | level | application name, traceId, SpanId, SpanExport | thread | logger | %F.%M:%L |msg -->
<!-- ':-' supplies a default value when the MDC entry is missing -->
<pattern>[%date{ISO8601}]|%-5level|[${applicationName},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-}%X{X-Span-Export:-}]|[%thread]|%logger{0}|%F.%M:%L|%msg%n</pattern>
</encoder>
</appender>
<!-- loggers -->
<logger name="org.springframework.web" level="INFO"/>
<!-- root logger: used when no more specific logger matches -->
<root level="INFO">
<appender-ref ref="APP-LOG" />
</root>
</configuration>
With the configuration above, the regular logs should be printed in the following format:
date | level | application name, traceId, spanId, spanExport | thread | logger | %F.%M:%L | msg
[2019-01-15 19:44:44,608]|WARN |[tp-forum-api,903df707120b2927,903df707120b2927,false]|[http-nio-8080-exec-1]|HelloWrold|HelloWrold.java.hello:21|This is a warn message
[2019-01-15 19:44:44,609]|INFO |[tp-forum-api,903df707120b2927,903df707120b2927,false]|[http-nio-8080-exec-1]|HelloWrold|HelloWrold.java.hello:22|This is a info message
[2019-01-15 19:44:44,609]|ERROR|[tp-forum-api,903df707120b2927,903df707120b2927,false]|[http-nio-8080-exec-1]|HelloWrold|HelloWrold.java.hello:23|This is a error message
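For reference, here is a minimal sketch of the kind of endpoint that would produce the lines above (the class and mapping names follow the sample output and are only an illustration, not code from the original project; Sleuth puts X-B3-TraceId/X-B3-SpanId into the MDC automatically, so the controller itself does nothing special):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloWrold {
    private static final Logger LOGGER = LoggerFactory.getLogger(HelloWrold.class);

    @GetMapping("/hello")
    public String hello() {
        // the traceId/spanId in the pattern come from the MDC populated by Sleuth, not from this code
        LOGGER.warn("This is a warn message");
        LOGGER.info("This is a info message");
        LOGGER.error("This is a error message");
        return "hello";
    }
}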
logback-access-spring.xml
<configuration>
<!-- it is always a good idea to activate an OnConsoleStatusListener -->
<!-- listens for Logback status changes and prints them to the console immediately -->
<statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener" />
<!-- properties -->
<springProperty scope="context" name="applicationName" source="spring.application.name"/>
<!-- access log file location -->
<property name="ACCESS_LOG_FILE" value="${ACCESS_LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/access.log}"/>
<!-- appender rules -->
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${ACCESS_LOG_FILE}</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${ACCESS_LOG_FILE}.%d{yyyy-MM-dd}.log.zip</fileNamePattern>
</rollingPolicy>
<encoder>
<pattern>[%t{ISO8601}]|[${applicationName},%i{X-B3-TraceId},%i{X-B3-SpanId:-},%i{X-Span-Export:-}]|%A|%h|%i{X-Real-IP}|%u|"%r"|%s|%b|%D|"%i{Referer}"|"%i{User-Agent}"</pattern>
</encoder>
</appender>
<appender-ref ref="FILE" />
</configuration>
Here is what actually gets printed (see the line below).
As you can see, the TraceId and SpanId are not printed. My guess is that when the access log entry is written those IDs have not been generated yet. There was therefore no good way to correlate the access log with the regular application logs, so this approach was dropped in the end.
[2019-01-15 19:44:44,640]|[tp-forum-api,-,-,-]|10.2.202.203|0:0:0:0:0:0:0:1|-|-|"GET /hello HTTP/1.1"|200|5|96|"-"|"PostmanRuntime/7.4.0"
2. AOP + Interceptor + Logback
- Environment setup
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.1.RELEASE</version>
<relativePath />
</parent>
- Dependency jars
<dependency>
<groupId>org.perf4j</groupId>
<artifactId>perf4j</artifactId>
<version>0.9.16</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
<version>2.1.1.RELEASE</version>
</dependency>
- logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<!-- constant: directory where the log files are written -->
<property name="log.dir" value="logs"/>
<!-- keep at most 30 days of history -->
<property name="maxHistory" value="30"/>
<!-- include Spring Boot's default Logback configuration -->
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
<!-- context-scoped variable read from the Spring configuration -->
<springProperty scope="context" name="applicationName" source="spring.application.name"/>
<!-- log file location; ':-' means fall back to the value after it when the variable is not set -->
<!-- LOG_FILE corresponds to logging.file and LOG_PATH to logging.path in the properties -->
<property name="LOG_FILE"
value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/spring.log}"/>
<!-- default appender, writes to LOG_FILE -->
<appender name="app" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${LOG_FILE}</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- daily rollover -->
<fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.log</fileNamePattern>
<!-- keep 30 days' worth of history capped at 1GB total size -->
<maxHistory>30</maxHistory>
<totalSizeCap>1GB</totalSizeCap>
</rollingPolicy>
<encoder>
<pattern>
[%d{yyyy-MM-dd-HH:mm:ss}]|%-5level|%X{X-B3-TraceId:-}|[%thread]|%logger{36}.%M:%L| %msg%n
</pattern>
</encoder>
</appender>
<!-- console appender -->
<appender name="STDOUT_CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<Target>System.out</Target>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss} %-40.40c [%5.5thread] %-5p - %m%n</pattern>
</encoder>
</appender>
<!-- access log file location -->
<property name="ACCESS_LOG_FILE"
value="${ACCESS_LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}}/access/access.log}"/>
<!-- access log appender, writes to ACCESS_LOG_FILE -->
<appender name="accesslog" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${ACCESS_LOG_FILE}</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${ACCESS_LOG_FILE}.%d{yyyy-MM-dd}.log</fileNamePattern>
<maxHistory>30</maxHistory>
<totalSizeCap>1GB</totalSizeCap>
</rollingPolicy>
<encoder>
<pattern>
%d{yyyy-MM-dd-HH:mm:ss} %m %n
</pattern>
</encoder>
</appender>
<!-- service performance log -->
<appender name="statFileAppender_service"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>${log.dir}/perf_service/perfStats.log.%d{yyyy-MM-dd}.log
</FileNamePattern>
<maxHistory>${maxHistory}</maxHistory>
</rollingPolicy>
<layout class="ch.qos.logback.classic.PatternLayout">
<Pattern>
%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n
</Pattern>
</layout>
</appender>
<!-- controller performance log -->
<appender name="statFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>${log.dir}/perf/perfStats.log.%d{yyyy-MM-dd}.log</FileNamePattern>
<maxHistory>${maxHistory}</maxHistory>
</rollingPolicy>
<layout class="ch.qos.logback.classic.PatternLayout">
<Pattern>
%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n
</Pattern>
</layout>
</appender>
<!-- DAO performance log -->
<appender name="statFileAppender_dao"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>${log.dir}/perf_dao/perfStats.log.%d{yyyy-MM-dd}.log
</FileNamePattern>
<maxHistory>${maxHistory}</maxHistory>
</rollingPolicy>
<layout class="ch.qos.logback.classic.PatternLayout">
<Pattern>
%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n
</Pattern>
</layout>
</appender>
<!-- periodic aggregation appenders -->
<!-- service -->
<appender name="CoalescingStatistics_Service"
class="org.perf4j.logback.AsyncCoalescingStatisticsAppender">
<!-- timeSlice: how often (ms) aggregated statistics are flushed to the file -->
<timeSlice>10000</timeSlice>
<appender-ref ref="statFileAppender_service"/>
</appender>
<!-- controller -->
<appender name="CoalescingStatistics"
class="org.perf4j.logback.AsyncCoalescingStatisticsAppender">
<!-- timeSlice: flush interval in ms, the default is 30000 -->
<timeSlice>10000</timeSlice>
<appender-ref ref="statFileAppender"/>
</appender>
<!-- DAO -->
<appender name="CoalescingStatistics_dao"
class="org.perf4j.logback.AsyncCoalescingStatisticsAppender">
<!-- timeSlice: flush interval in ms, the default is 30000 -->
<timeSlice>10000</timeSlice>
<appender-ref ref="statFileAppender_dao"/>
</appender>
<!-- access log logger -->
<logger name="access_log" additivity="false">
<level value="info"/>
<appender-ref ref="accesslog"/>
</logger>
<!-- Service performance logger -->
<logger name="service.perf.logger" level="info" additivity="false">
<appender-ref ref="CoalescingStatistics_Service"/>
</logger>
<!-- Controller performance logger -->
<logger name="org.perf4j.TimingLogger" level="info" additivity="false">
<appender-ref ref="CoalescingStatistics"/>
</logger>
<!-- dao performance log -->
<logger name="dao.perf.logger" level="info" additivity="false">
<appender-ref ref="CoalescingStatistics_dao"/>
</logger>
<!-- logger for the org.springframework package -->
<logger name="org.springframework" level="info" additivity="true">
<appender-ref ref="app"/>
<appender-ref ref="STDOUT_CONSOLE"/>
</logger>
<!-- root logger: the default when no specific logger matches -->
<root level="info">
<appender-ref ref="STDOUT_CONSOLE"/>
<appender-ref ref="app"/>
</root>
<!-- logger for the org.springframework.web package -->
<logger name="org.springframework.web" level="info"/>
</configuration>
With only the configuration above, you should get just the regular logs; the performance logs and the access log are not produced yet.
To produce the performance logs, the loggers configured above need to be combined with AOP aspects that watch the relevant methods and extract the elapsed time.
- Configure the performance-log aspects. The aspect for the service layer:
@Aspect
@Component
public class Perf4jLogAspectService {
//must match the logger name configured in logback-spring.xml
private static final String STOP_WATCH_LOGGER_NAME = "service.perf.logger";
private static final Log STOP_WATCH_LOGGER = LogFactory.getLog(STOP_WATCH_LOGGER_NAME);
//around advice for every method call under the service package
@Around("execution(public * com.renren.truckerpathapi.service..*.*(..))")
public Object logPer4jLogs(ProceedingJoinPoint point) throws Throwable {
String className = point.getSignature().getDeclaringTypeName();
String methodName = point.getSignature().getName();
String name = className + "." + methodName;
StopWatch stopwatch = new CommonsLogStopWatch(STOP_WATCH_LOGGER);
final Object returnValue = point.proceed();
stopwatch.stop(name);
return returnValue;
}
}
The aspects for the controller and dao layers are analogous:
@Aspect
@Component
public class Perf4jLogAspectController {
private static final String STOP_WATCH_LOGGER_NAME = "org.perf4j.TimingLogger";
private static final Log STOP_WATCH_LOGGER = LogFactory.getLog(STOP_WATCH_LOGGER_NAME);
@Around("execution(public * com.renren.truckerpathapi.controller..*.*(..))")
public Object logPer4jLogs(ProceedingJoinPoint point) throws Throwable {
String className = point.getSignature().getDeclaringTypeName();
String methodName = point.getSignature().getName();
String name = className + "." + methodName;
StopWatch stopwatch = new CommonsLogStopWatch(STOP_WATCH_LOGGER);
final Object returnValue = point.proceed();
stopwatch.stop(name);
return returnValue;
}
}
@Component
@Aspect
public class Perf4jLogAspectDao {
private static final String STOP_WATCH_LOGGER_NAME = "dao.perf.logger";
private static final Log STOP_WATCH_LOGGER = LogFactory.getLog(STOP_WATCH_LOGGER_NAME);
@Around("execution(public * com.renren.truckerpathapi.dao..*.*(..))")
public Object logPer4jLogs(ProceedingJoinPoint point) throws Throwable {
String className = point.getSignature().getDeclaringTypeName();
String methodName = point.getSignature().getName();
String name = className + "." + methodName;
StopWatch stopwatch = new CommonsLogStopWatch(STOP_WATCH_LOGGER);
final Object returnValue = point.proceed();
stopwatch.stop(name);
return returnValue;
}
}
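One caveat about these aspects: if point.proceed() throws, stopwatch.stop() is never reached, so failing calls are not timed. If that matters to you, a small variant (my own adjustment, not part of the original project) records failures under a separate tag:
@Around("execution(public * com.renren.truckerpathapi.service..*.*(..))")
public Object logPer4jLogs(ProceedingJoinPoint point) throws Throwable {
    String name = point.getSignature().getDeclaringTypeName() + "." + point.getSignature().getName();
    StopWatch stopwatch = new CommonsLogStopWatch(STOP_WATCH_LOGGER);
    try {
        Object returnValue = point.proceed();
        stopwatch.stop(name);            // time successful calls under the method name
        return returnValue;
    } catch (Throwable t) {
        stopwatch.stop(name + ".fail");  // time failing calls under a separate tag
        throw t;
    }
}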
As for the access log, remember the problem described above: with Spring Sleuth the traceId was not yet available when the access log was written.
The approach taken here is to add a request interceptor that runs when the request completes (a request always completes, whether it succeeded or failed) and writes the access log through the logger configured in logback-spring.xml. The implementation:
AccessLogInterceptor.java
@Component
public class AccessLogInterceptor implements HandlerInterceptor {
@Autowired
private GetUserByTokenService getUserByTokenService;
private static final Log LOGGER = LogFactory.getLog(AccessLogInterceptor.class);
private static final Log ACCESS_LOG = LogFactory.getLog("access_log");
private static final String SEPARATOR = "|";
private static final String EMPTY_STR = "";
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
Object handler) {
return true;
}
@Override
public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) {
}
@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
Object handler, Exception ex) {
//domain & uri & refer
String domain = request.getServerName();
String uri = request.getRequestURI();
String refer = request.getHeader("Referer");
final String token = request.getHeader("x-auth-token");
final String installation_ID = request.getHeader("Installation-ID");
final Users loginUser = getUserByTokenService.getUserByToken(token);
//traceId
String traceId = request.getHeader("X-B3-TraceId");
if (StringUtils.isEmpty(traceId)) {
traceId = UUID.randomUUID().toString();
}
MDC.put("X-B3-TraceId", traceId);
String remoteIp = getRemoteAddr(request);
String userId = loginUser == null ? installation_ID : String.valueOf(loginUser.getId());
String userName = loginUser == null ? "" :
loginUser.getFirstName() + " " + loginUser.getLastName();
String userAgent = request.getHeader("User-Agent");
Map<String, String> paramPair = getRequestParamValueMap(request);
printAccesslog(traceId, remoteIp, userId, userName, domain, uri, refer, userAgent, paramPair);
}
//write the access log entry
private void printAccesslog(String traceId, String remoteIp, String userId, String userName,
String domain, String uri, String refer,
String userAgent, Map<String, String> paramPair) {
StringBuilder sb = new StringBuilder();
long timestamp = System.currentTimeMillis();
sb.append(SEPARATOR).append(timestamp);
sb.append(SEPARATOR).append(remoteIp);
sb.append(SEPARATOR).append(userId);
sb.append(SEPARATOR).append(userName);
//the traceId is used to correlate the access log with the business logs
sb.append(SEPARATOR).append(traceId);
sb.append(SEPARATOR).append(domain);
sb.append(SEPARATOR).append(uri);
sb.append(SEPARATOR).append(refer);
sb.append(SEPARATOR).append(userAgent);
//write the parameter map as JSON, which makes statistical analysis easier
sb.append(SEPARATOR).append(JsonUtils.toJson(paramPair));
ACCESS_LOG.info(sb.toString());
}
private String getRemoteAddr(HttpServletRequest request) {
if (LOGGER.isDebugEnabled()) {// log all the candidate IP headers
LOGGER.debug("X-Forwarded-For:" + request.getHeader
("X-Forwarded-For") +
"\tProxy-Client-IP:" + request.getHeader("Proxy-Client-IP") +
"\t:WL-Proxy-Client-IP:" +
request.getHeader("WL-Proxy-Client-IP") + "\tRemoteAddr:" + request
.getRemoteAddr());
}
String ip;
@SuppressWarnings("unchecked")
Enumeration<String> xffs = request.getHeaders("X-Forwarded-For");
if (xffs.hasMoreElements()) {
String xff = xffs.nextElement();
ip = resolveClientIPFromXFF(xff);
if (isValidIP(ip)) {
return ip;
}
}
ip = request.getHeader("Proxy-Client-IP");
if (isValidIP(ip)) {
return ip;
}
ip = request.getHeader("WL-Proxy-Client-IP");
if (isValidIP(ip)) {
return ip;
}
return request.getRemoteAddr();
}
/**
* Resolve the real client IP from the X-Forwarded-For header. X-Forwarded-For is not a standard
* HTTP request header defined by any RFC; see http://en.wikipedia.org/wiki/X-Forwarded-For
*
* @param xff the value of the X-Forwarded-For header
* @return the client IP as a string if it can be resolved, otherwise null
*/
private String resolveClientIPFromXFF(String xff) {
if (xff == null || xff.isEmpty()) {
return null;
}
String[] ss = xff.split(",");
for (int i = ss.length - 1; i >= 0; i--) {// walk the X-Forwarded-For chain from the end
String ip = ss[i].trim();
if (isValidIP(ip)) {
return ip;
}
}
return null;
}
private static final Pattern ipPattern = Pattern.compile("([0-9]{1,3}\\.){3}[0-9]{1,3}");
private boolean isValidIP(String ip) {
if (ip == null || ip.isEmpty() || "unknown".equalsIgnoreCase(ip)) {
return false;
}
return ipPattern.matcher(ip).matches();
}
private Map<String, String> getRequestParamValueMap(HttpServletRequest request) {
Map<String,String> param2value = new HashMap<>();
Enumeration e = request.getParameterNames();
String param;
while(e.hasMoreElements()) {
param = (String)e.nextElement();
if(param != null) {
String value = request.getParameter(param);
if(value != null) {
param2value.put(param, value);
}
}
}
return param2value;
}
}
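JsonUtils.toJson above is a project-internal helper whose implementation is not shown in this post; as an assumption, a minimal Jackson-based version could look like this:
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public final class JsonUtils {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    private JsonUtils() {
    }

    // serialize any object to a JSON string; fall back to "{}" if serialization fails
    public static String toJson(Object value) {
        try {
            return MAPPER.writeValueAsString(value);
        } catch (JsonProcessingException e) {
            return "{}";
        }
    }
}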
Finally, the interceptor has to be registered:
@Configuration
@EnableWebMvc
public class ApiWebMvcConfig implements WebMvcConfigurer {
@Bean
public HandlerInterceptor getAccessLogInterceptor(){
return new AccessLogInterceptor();
}
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addInterceptor(getAccessLogInterceptor()).addPathPatterns("/**");
}
}
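Note that in the interceptor above MDC.put runs in afterCompletion, i.e. after the request has already been handled, so the regular logs written while the request is being processed do not yet carry that traceId. If you want them to share the same id, one option (a sketch of my own, not taken from the original project) is to populate the MDC in preHandle and clear it in afterCompletion:
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
    String traceId = request.getHeader("X-B3-TraceId");
    if (StringUtils.isEmpty(traceId)) {
        traceId = UUID.randomUUID().toString();
    }
    // every log statement written while handling this request now sees the traceId
    MDC.put("X-B3-TraceId", traceId);
    return true;
}

@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
        Object handler, Exception ex) {
    try {
        // ... build and write the access log line exactly as above,
        // reading the id back with MDC.get("X-B3-TraceId") ...
    } finally {
        MDC.remove("X-B3-TraceId"); // avoid leaking the id to the next request handled by this thread
    }
}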
Summary:
Those are the two ways of adding logging that I have tried. Our project currently uses the second approach, simply because the logs it produces are more complete.