Feedback

Author: 余政彥 | Published 2019-08-15 06:49

    reviewer 1 review

    score 5/5

      Overall Rating

        Outstanding

      Comments on Overall Rating

        (1) The answers are good and the corresponding explanations are reasonable.

        (2) The team developed an integrated visual analytics tool to explore the data and perform reasoning. The tool supports interactive search, query, and filtering. The most impressive details are the saving functions and the provided help tips.

      All Challenges - Answers to Questions

        The answers are overall acceptable.

        All the data are incorporated, i.e., static and mobile sensor data.

        The uncertainty is calculated by traditional IQR-based approaches. Other anomalies that usually occur in databases (missing, negative, or extremely large values) are also considered.
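        As a rough illustration (not the team's actual code), an IQR-based screen combined with those database checks could look like the sketch below; the column names and the 1.5×IQR threshold are assumptions.

        ```python
        import pandas as pd

        # Hedged sketch of an IQR-based screen plus basic sanity checks; the column
        # names ("sensor_id", "value") and the 1.5*IQR threshold are illustrative
        # assumptions, not the team's actual implementation.
        def flag_anomalies(df: pd.DataFrame, k: float = 1.5) -> pd.DataFrame:
            out = df.copy()
            q1 = out.groupby("sensor_id")["value"].transform(lambda s: s.quantile(0.25))
            q3 = out.groupby("sensor_id")["value"].transform(lambda s: s.quantile(0.75))
            iqr = q3 - q1
            out["iqr_outlier"] = (out["value"] < q1 - k * iqr) | (out["value"] > q3 + k * iqr)
            out["missing"] = out["value"].isna()                          # missing readings
            out["negative"] = out["value"] < 0                            # physically impossible values
            out["extreme"] = out["value"] > out["value"].quantile(0.999)  # extremely large values
            return out
        ```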

        The assessments are well explained, such as why most cars are free of contamination risk and why a car (No. 20) is worth keeping an eye on.

        The assessment of the difference between static and streaming data analysis is acceptable.

      Review Part 2 - Respond for Grand Challenge and All Mini-Challenges

        The team developed an integrated visual analytics tool to explore the data and perform reasoning. Multiple views (animated map, line charts, and uncertainty glyphs) are well combined. The tool supports interactive search, query, and filtering to help experts perform deep reasoning. This reasoning process is better illustrated in the video than on the submission web page.

    ----------------------------------------------------------------

    reviewer 2 review

    score 4/5

      Overall Rating

        Good

      Comments on Overall Rating

        This team built a reasonable custom dashboard for tackling this problem and managed to get the big picture of what was going on. There are some missed opportunities to dig further into the data, but many of these were missed by the other submissions I have reviewed. The questions were generally well answered and supported by visualization and discussion, though questions 4 and 5 strayed a bit.

      All Challenges - Answers to Questions

        In general, this submission did a reasonable job of answering the questions. The answers were generally well supported by visualization and an explanation of their thoughts. While not uncovering everything that was in the ground truth document, they do not deviate significantly from the answers I observed in other submissions. They spotted some of the spreading radiation and identified a number of anomalous readings from the sensors.

        The approach taken to uncertainty is generally well explained, though they do not take a holistic view in which the uncertainty of an area is based on a lack of readings; instead, they appear to think about the reliability of the readings taken in a particular area. This leads to a somewhat distorted view where regions with few readings are deemed _less_ uncertain because the lack of readings brings the variance down.
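        The distinction can be made concrete with a toy example (hypothetical numbers, not the challenge data): a spread-of-readings measure and a coverage-based measure rank the same two cells in opposite order.

        ```python
        import numpy as np

        # Toy numbers (not challenge data) illustrating the point: a barely sampled
        # cell looks *less* uncertain under a pure spread-of-readings measure, while
        # a coverage-aware measure would rate it as *more* uncertain.
        sparse_cell = np.array([15.2, 15.4])                          # 2 nearly identical readings
        dense_cell = np.array([14.0, 18.0, 15.5, 16.2, 13.8, 17.1])   # 6 more varied readings

        for name, cell in [("sparse", sparse_cell), ("dense", dense_cell)]:
            spread = cell.std(ddof=1)    # reliability-of-readings view: low for the sparse cell
            coverage = 1.0 / len(cell)   # one simple coverage-based view: high for the sparse cell
            print(f"{name}: spread={spread:.2f}, coverage uncertainty={coverage:.2f}")
        ```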

        Holistic thinking would also have been helpful when considering the behavior of the sensors themselves. Since they focused on outliers or anomalies, they failed to capture more systemic problems with the sensors, nor did they take the opportunity to calibrate them against other sensors (another common feature across the submissions I've reviewed). As such, they missed many of the ground truth sensor issues.
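        One common way to do such calibration, sketched here with assumed column names rather than anything from the submission, is to compare each sensor against the concurrent median of its neighbors; a persistent offset points to a miscalibrated or drifting sensor rather than a one-off outlier.

        ```python
        import pandas as pd

        # Hedged sketch of cross-sensor calibration; "df" and its columns
        # (sensor_id, neighborhood, timestamp, value) are assumptions.
        def calibration_offsets(df: pd.DataFrame) -> pd.Series:
            df = df.copy()
            df["hour"] = df["timestamp"].dt.floor("h")
            # Reference level: median of all sensors in the same neighborhood and hour.
            reference = df.groupby(["neighborhood", "hour"])["value"].transform("median")
            df["offset"] = df["value"] - reference
            # A sensor whose median offset stays far from zero is likely miscalibrated
            # or drifting systematically, which a per-reading outlier test misses.
            return df.groupby("sensor_id")["offset"].median().sort_values()
        ```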

        The place where this submission is the furthest off is in response to the question of how many contaminated cars there are. They list 11 and proceed to list off sensors that have high readings. This seems to be a confusion about whether or not the contaminated cars have sensors (again, a common mistake). Unfortunately, many of the cars they list are sensors that are supposed to be suspect.

        For part four, we don't really get a report on the state of the city; we get an explanation of the workflow with their tool.

        For question 5, I think the authors didn't fully understand the difference between streaming and static analysis. The claim appears to be that in a streaming scenario there is no access to historical data, which they feel is very important. I'm not sure why there would be no memory in the system -- just no peeking at the future.

      Review Part 2 - Respond for Grand Challenge and All Mini-Challenges

        This team appears to have written a custom visual analytics tool, which certainly was the primary analytic tool. They designed a dashboard that seems to have given them good access to the data. I was thrilled to see that the tool actually included a mechanism for annotation, so observations could be recorded and returned to. I wish that they could have taken this a little bit further, however, and allowed the sensor readings to be edited. I think removing a couple of outliers would have improved some of their visualizations.

        There were two choices that I found unfortunate from a visualization perspective. First, for the heatmaps of radiation levels, the scale seems to be set dynamically based on the current data subset. This makes it nearly impossible to compare different times. Also, the outliers seem to have seriously skewed the visualizations, obscuring the general rise in readings.
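        A minimal sketch of how both issues could be avoided (placeholder data, not the team's tool): compute shared, percentile-based color limits once and reuse them for every time slice.

        ```python
        import numpy as np
        import matplotlib.pyplot as plt

        # Placeholder grids standing in for radiation heatmaps over three time slices;
        # fixed, robust color limits let the slices be compared and keep one extreme
        # outlier from washing out the general rise.
        rng = np.random.default_rng(0)
        slices = [rng.gamma(2.0, 5.0, size=(20, 20)) * (1 + 0.3 * t) for t in range(3)]
        slices[1][5, 5] = 1e4                                  # one extreme outlier

        all_values = np.concatenate([s.ravel() for s in slices])
        vmin, vmax = np.percentile(all_values, [2, 98])        # robust limits shared by every slice

        fig, axes = plt.subplots(1, len(slices), figsize=(9, 3))
        for t, (ax, grid) in enumerate(zip(axes, slices)):
            im = ax.imshow(grid, vmin=vmin, vmax=vmax, cmap="viridis")
            ax.set_title(f"t={t}")
        fig.colorbar(im, ax=list(axes))
        plt.show()
        ```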

        I am also not convinced by the pale borders for indicating the level of uncertainty. I think that makes the graphs harder to read, making it difficult to really see the regions where there are problems. I think they would have done better to make another chart, rather than trying to squeeze another variable into the existing one.

        Overall, I think the team produced a reasonable dashboard. There were some missed opportunities (some more processing on the data, and more flexibility in looking at the position of sensors over time stand out), but the big picture seems to have been acquired.

    ----------------------------------------------------------------

    reviewer 3 review

    score 3/5

      Overall Rating

        Average

      Comments on Overall Rating

        + clear and understandable submission
        + questions are mostly answered
        + good use of multiple visualizations that focus on different aspects of the data
        - VA tool is not innovative
        - interactions in the approach are questionable (reselection of timestamps in each individual view)

      All Challenges - Answers to Questions

        Strengths:
        + most data types are used: timestamp, latitude, longitude, sensor ID, value
        + Long/Lat is used for individual sensors and for sensors per region
        + good representation of uncertainty and explanation of where uncertainty comes from
        + assumptions are well documented, reasonable, and comprehensible
        + the question of where to deploy more sensors was very well explained (comprehensive and detailed)

        Neutral:
        o discussion of streaming versus static data was ok
        o using the fine-grained grid is ok, but using roads to map data might have given even better information; for specific cases, a coarser grid based on the districts would also have been good for aggregating data

        Weaknesses:
        - uncertainty visualization is simplistic and not innovative
        - detection of patterns and anomalous behaviors of sensors is weak
        - mobile sensors with high radiation are not correlated with stationary sensors to check whether the mobile sensor is contaminated and to calibrate it
        - wrong conclusions about mobile sensors, e.g., constant values of mobile sensors might be because these cars are contaminated
        - cause and effect misinterpreted: mobile sensors are not moving because radiation decreases; the measured radiation changes because the car is moving
        - only the two main events are detected (earthquake and aftershock); no other real patterns are detected

      Review Part 2 - Respond for Grand Challenge and All Mini-Challenges

        The submission and answers are clear, the questions are answered, and the answers are supported by visualizations.

        Visual analytics tool:

        - tool is not really innovative but serves the purpose of the data
        - map of data grid is helpful
        - design of interactions is questionable, e.g., timestamp has to be selected in each view separately
        - color ramp is from blue to red over green (red-green is generally a bad choice)
        - the max value of each map is different and changes for different timestamps
        - 5 different views showing different aspects of the data, but not really well interconnected

        - the menu that is hidden on the left is strange; why not integrate this into the approach? There is enough space, and the data represented in this view might be relevant for other views

    ----------------------------------------------------------------

    reviewer 4 review

    score 3/5

      Overall Rating

        Average

      Comments on Overall Rating

        This submission does a good job of providing an interactive platform for examining the data and the capability to share results. They provide clear illustrations by placing data on shape files and allowing users to step through time slices to view changes in the data. Recognizing limitations in a browser’s ability to display every data point at the same time, they provide at-a-glance detection of possible anomalies – helping the user home in on “interesting” data areas. There was a good discussion of static vs mobile sensors as well as acknowledgement of the differences between static vs streaming data analysis.

        The visualization techniques are not particularly novel, though the designed dashboard is new. Several of the visualizations are misleading in their depictions by displaying the color blue (for 0 radiation) in areas where there are clearly higher levels of radiation. In these cases, the authors’ conclusions are not reflected in the visualization (and this disconnect is not addressed). It would have been nice to see deeper trending analysis organized by sensor readings, instead of a single scatterplot depicting all sensor readings – the outliers in the data set forced the visualization to hide unusual shifts in the data by squishing “normally” functioning sensor readings on top of each other at the bottom. As a result, things such as contaminated vehicles, malfunctioning sensors, and contamination spread were much more difficult to identify, and wrong conclusions were reached.
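        The distortion being described can be reproduced with placeholder readings (an assumption, not the submission's data): one extreme value pins every other reading to the bottom of a linear axis, whereas a symlog axis, or clipping to a robust percentile, keeps the level shift visible.

        ```python
        import numpy as np
        import matplotlib.pyplot as plt

        # Placeholder readings showing how one extreme outlier flattens a linear axis,
        # while a symlog axis keeps the underlying level shift visible.
        rng = np.random.default_rng(1)
        t = np.arange(500)
        values = rng.normal(15.0, 2.0, size=500)
        values[250:] += 10.0        # the kind of shift that gets squashed
        values[100] = 5000.0        # a single extreme outlier

        fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(9, 3), sharex=True)
        ax_lin.scatter(t, values, s=4)
        ax_lin.set_title("linear axis: shift hidden by the outlier")
        ax_log.scatter(t, values, s=4)
        ax_log.set_yscale("symlog")   # or clip to a robust percentile before plotting
        ax_log.set_title("symlog axis: shift remains visible")
        plt.show()
        ```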

      All Challenges - Answers to Questions

        This submission is clear in its depiction of analytic results to the questions asked, along with supporting documentation. They did a good job of stating assumptions and definitions up front and remained consistent throughout the analysis. While the visualization techniques are not new, they are applied effectively to answer questions. For the most part, the visualizations correctly supported the analytic conclusions drawn. In some cases, however, the visualizations are misleading in terms of depicting radiation levels over time and areas of contamination. Because of this, some of the conclusions drawn were not correct and affected the rationale for questions that built on each other.

        A short discussion was provided regarding static vs streaming data analysis in which the main differences were mentioned.

      Review Part 2 - Respond for Grand Challenge and All Mini-Challenges

        This was a good use of visual analytic techniques to model the data set, with the resulting dashboard tool being an effective medium for exploration. The interactivity of the tool enables good story-telling for data analysis and comprehension. All of the data appears to have been explored; however, they noted that browser limitations would keep the entire set from being cohesively visualized. It was clear that they used the visualization tool to answer the questions and were able to provide snapshots of supporting visualized evidence. Many of the analytical conclusions reached were off-base, which I suspect is due to a lack of deeper trend analysis that might have revealed sensor-shift characteristics (including subtle ones, such as on/off ramping, and more vibrant ones, such as right after the earthquake) that went unidentified.
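        A minimal sketch of what such trend analysis might look like (assumed column names and thresholds, not the submission's code): a rolling median per sensor smooths isolated spikes, and jumps in that median flag level shifts such as on/off ramping or the post-earthquake rise.

        ```python
        import pandas as pd

        # Hedged sketch of per-sensor trend analysis; the frame layout, window size,
        # and jump threshold are assumptions for illustration.
        def sensor_shifts(df: pd.DataFrame, window: str = "2h", jump: float = 5.0) -> pd.DataFrame:
            df = df.sort_values("timestamp").set_index("timestamp")
            trends = (df.groupby("sensor_id")["value"]
                        .rolling(window).median()      # smooths out isolated spikes
                        .reset_index())
            # A large change in the rolling median flags level shifts (on/off ramping,
            # the post-earthquake rise) rather than single anomalous readings.
            trends["shift"] = trends.groupby("sensor_id")["value"].diff().abs() > jump
            return trends
        ```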

        Overall, the demonstrated tool could be an effective medium for big data analysis – offering multiple views of the data and interactivity for exploration.

    ----------------------------------------------------------------
