:original_name: mrs_01_1457.html
CarbonData FAQ
Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values? <mrs_01_1458>
How to Avoid Minor Compaction for Historical Data? <mrs_01_1459>
How to Change the Default Group Name for CarbonData Data Loading? <mrs_01_1460>
Why Does INSERT INTO CARBON TABLE Command Fail? <mrs_01_1461>
Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters? <mrs_01_1462>
Why Does Data Load Performance Decrease Due to Bad Records? <mrs_01_1463>
Why Is the INSERT INTO/LOAD DATA Task Distribution Incorrect, and Why Are Fewer Tasks Opened Than the Available Executors When the Number of Initial Executors Is Zero? <mrs_01_1464>
Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed? <mrs_01_1465>
Why Does Data Loading Fail When Off-Heap Memory Is Used? <mrs_01_1466>
Why Do I Fail to Create a Hive Table? <mrs_01_1467>
Why Do CarbonData Tables Created in V100R002C50RC1 Not Reflect the Privileges Provided in Hive Privileges for Non-Owners? <mrs_01_1468>
How Do I Logically Split Data Across Different Namespaces? <mrs_01_1469>
Why Is a Missing Privileges Exception Reported When I Perform a Drop Operation on Databases? <mrs_01_1470>
Why Can't the UPDATE Command Be Executed in Spark Shell? <mrs_01_1471>
How Do I Configure Unsafe Memory in CarbonData? <mrs_01_1472>
Why Does an Exception Occur in CarbonData When a Disk Space Quota Is Set for the Storage Directory in HDFS? <mrs_01_1473>
Why Does Data Query or Loading Fail and "org.apache.carbondata.core.memory.MemoryException: Not enough memory" Is Displayed? <mrs_01_1474>
why_is_incorrect_output_displayed_when_i_perform_query_with_filter_on_decimal_data_type_values
how_to_avoid_minor_compaction_for_historical_data
how_to_change_the_default_group_name_for_carbondata_data_loading
why_does_insert_into_carbon_table_command_fail
why_is_the_data_logged_in_bad_records_different_from_the_original_input_data_with_escape_characters
why_data_load_performance_decreases_due_to_bad_records
why_insert_into_load_data_task_distribution_is_incorrect_and_the_opened_tasks_are_less_than_the_available_executors_when_the_number_of_initial_executors_is_zero
why_does_carbondata_require_additional_executors_even_though_the_parallelism_is_greater_than_the_number_of_blocks_to_be_processed
why_data_loading_fails_during_off_heap
why_do_i_fail_to_create_a_hive_table
why_carbondata_tables_created_in_v100r002c50rc1_not_reflecting_the_privileges_provided_in_hive_privileges_for_non-owner
how_do_i_logically_split_data_across_different_namespaces
why_missing_privileges_exception_is_reported_when_i_perform_drop_operation_on_databases
why_the_update_command_cannot_be_executed_in_spark_shell
how_do_i_configure_unsafe_memory_in_carbondata
why_exception_occurs_in_carbondata_when_disk_space_quota_is_set_for_storage_directory_in_hdfs
why_does_data_query_or_loading_fail_and_org.apache.carbondata.core.memory.memoryexception_not_enough_memory_is_displayed