:original_name: mrs_01_1457.html

.. _mrs_01_1457:

CarbonData FAQ
==============

- :ref:`Why Is Incorrect Output Displayed When I Perform Query with Filter on Decimal Data Type Values? `
- :ref:`How to Avoid Minor Compaction for Historical Data? `
- :ref:`How to Change the Default Group Name for CarbonData Data Loading? `
- :ref:`Why Does INSERT INTO CARBON TABLE Command Fail? `
- :ref:`Why Is the Data Logged in Bad Records Different from the Original Input Data with Escape Characters? `
- :ref:`Why Does Data Load Performance Decrease Due to Bad Records? `
- :ref:`Why Is the INSERT INTO/LOAD DATA Task Distribution Incorrect, with Fewer Tasks Opened Than the Available Executors, When the Number of Initial Executors Is Zero? `
- :ref:`Why Does CarbonData Require Additional Executors Even Though the Parallelism Is Greater Than the Number of Blocks to Be Processed? `
- :ref:`Why Does Data Loading Fail During Off Heap? `
- :ref:`Why Do I Fail to Create a Hive Table? `
- :ref:`Why Do CarbonData Tables Created in V100R002C50RC1 Not Reflect the Privileges Provided in Hive Privileges for Non-Owners? `
- :ref:`How Do I Logically Split Data Across Different Namespaces? `
- :ref:`Why Is a Missing Privileges Exception Reported When I Perform a Drop Operation on Databases? `
- :ref:`Why Cannot the UPDATE Command Be Executed in Spark Shell? `
- :ref:`How Do I Configure Unsafe Memory in CarbonData? `
- :ref:`Why Does an Exception Occur in CarbonData When a Disk Space Quota Is Set for the Storage Directory in HDFS? `
- :ref:`Why Does Data Query or Loading Fail and "org.apache.carbondata.core.memory.MemoryException: Not enough memory" Is Displayed? `

.. toctree::
   :maxdepth: 1
   :hidden:

   why_is_incorrect_output_displayed_when_i_perform_query_with_filter_on_decimal_data_type_values
   how_to_avoid_minor_compaction_for_historical_data
   how_to_change_the_default_group_name_for_carbondata_data_loading
   why_does_insert_into_carbon_table_command_fail
   why_is_the_data_logged_in_bad_records_different_from_the_original_input_data_with_escape_characters
   why_data_load_performance_decreases_due_to_bad_records
   why_insert_into_load_data_task_distribution_is_incorrect_and_the_opened_tasks_are_less_than_the_available_executors_when_the_number_of_initial_executors_is_zero
   why_does_carbondata_require_additional_executors_even_though_the_parallelism_is_greater_than_the_number_of_blocks_to_be_processed
   why_data_loading_fails_during_off_heap
   why_do_i_fail_to_create_a_hive_table
   why_carbondata_tables_created_in_v100r002c50rc1_not_reflecting_the_privileges_provided_in_hive_privileges_for_non-owner
   how_do_i_logically_split_data_across_different_namespaces
   why_missing_privileges_exception_is_reported_when_i_perform_drop_operation_on_databases
   why_the_update_command_cannot_be_executed_in_spark_shell
   how_do_i_configure_unsafe_memory_in_carbondata
   why_exception_occurs_in_carbondata_when_disk_space_quota_is_set_for_storage_directory_in_hdfs
   why_does_data_query_or_loading_fail_and_org.apache.carbondata.core.memory.memoryexception_not_enough_memory_is_displayed